Ask HN: What things are happening in ML that we can't hear over the din of LLMs?

aflip | 364 points

Some exciting projects from the last few months:

- 3d scene reconstruction from a few images: https://dust3r.europe.naverlabs.com/

- gaussian avatars: https://shenhanqian.github.io/gaussian-avatars

- relightable gaussian codec: https://shunsukesaito.github.io/rgca/

- track anything: https://co-tracker.github.io/ https://omnimotion.github.io/

- segment anything: https://github.com/facebookresearch/segment-anything

- good human pose estimation models (YOLOv8, Google's MediaPipe models)

- realistic TTS: https://huggingface.co/coqui/XTTS-v2, bark TTS (hit or miss)

- great open STT (mostly Whisper-based)

- machine translation (ex: seamlessm4t from meta)

It's crazy to see how much is coming out of Meta's R&D alone.

lelag | a month ago

NeRFs. It's a rethink of 3D graphics from the ground up, oriented around positioning glowing translucent orbs instead of textured polygons. The positions and colors of the orbs are learned by a NN given accurate multi-angle camera shots and poses, and then you can render them on GPUs by ray tracing. The resulting scenes are entirely photo-realistic, as they were generated from photos, but they can also be explored.
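
Roughly, the rendering step alpha-composites density and color along each ray. A toy sketch of that compositing step (the densities and colors here are made-up placeholders for what the trained network would actually predict at the sample points):

    import numpy as np

    def composite(sigma, rgb, deltas):
        """Alpha-composite samples along one ray, front to back."""
        alpha = 1.0 - np.exp(-sigma * deltas)                           # opacity per segment
        trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha]))[:-1]   # light surviving so far
        weights = trans * alpha                                         # contribution of each sample
        return (weights[:, None] * rgb).sum(axis=0)                     # final pixel color

    sigma  = np.array([0.1, 0.5, 3.0, 0.2])                 # placeholder densities along the ray
    rgb    = np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.], [1., 1., 1.]])
    deltas = np.full(4, 0.25)                               # spacing between samples
    print(composite(sigma, rgb, deltas))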

In theory you can also animate such scenes but how to actually do that is still a research problem.

Whether this will end up being better than really well-optimized polygon-based systems like Nanite + photogrammetry is also an open question. The existing poly pipelines are pretty damn good already.

mike_hearn | a month ago

One area that I would dive into (if I had more time) is "geometric deep learning", i.e. how to design models in a principled way to respect known symmetries in the data. ConvNets are the famous/obvious example with their translation equivariance, but there are many recent examples that extend the same logic to other symmetry groups. And then there is also the question of whether certain symmetries can be discovered or identified automatically.
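
A quick sanity check of what translation equivariance means for a conv layer, in case it's unfamiliar: shifting the input and then convolving gives the same result as convolving and then shifting (exactly so with circular padding, only approximately near the border with zero padding). A minimal sketch:

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    conv = nn.Conv2d(1, 1, kernel_size=3, padding=1, padding_mode="circular", bias=False)

    x = torch.randn(1, 1, 16, 16)
    shift = lambda t: torch.roll(t, shifts=3, dims=-1)      # translate 3 pixels to the right

    print(torch.allclose(shift(conv(x)), conv(shift(x)), atol=1e-6))   # True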

angusturner | a month ago

I launched https://app.scholars.io to get the latest research from arXiv on specific topics I'm interested in, so I can filter out the ones I'm not. Hopefully it will help someone find research activity other than LLMs.

postatic | a month ago

Anyone know anything I can use to take video of a road from my car (on a phone) and create a 3D scene from it? I'm more focused on the scenery around the road, as I can put a road surface in there myself later. I'm talking about several miles or perhaps more, but I don't mind if it takes a lot of processing time or if I need multiple angles; I can drive it several times from several directions. I'm trying to create a local road or two for driving on in racing simulators.

ok_dad | a month ago

More of a cousin of LLMs: Vision-Language-Action (VLA) models like RT-2 [1]. In addition to text and vision data, they treat robot actions as "another language", tokenizing them so the model can output movement commands for robots.

[1]: https://robotics-transformer2.github.io
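
The core trick is just discretising each continuous action dimension into bins and treating the bin indices as extra tokens the model can emit. A sketch of that idea (bin count and action range are illustrative, not RT-2's actual configuration):

    import numpy as np

    N_BINS = 256
    LOW, HIGH = -1.0, 1.0                     # assumed normalised action range

    def action_to_tokens(action):
        """Map a continuous action vector to discrete token ids."""
        scaled = (np.clip(action, LOW, HIGH) - LOW) / (HIGH - LOW)    # 0..1
        return np.minimum((scaled * N_BINS).astype(int), N_BINS - 1)

    def tokens_to_action(tokens):
        """Invert the mapping, to the centre of each bin."""
        return LOW + (tokens + 0.5) / N_BINS * (HIGH - LOW)

    a = np.array([0.12, -0.80, 0.33])         # e.g. an end-effector delta in x, y, z
    toks = action_to_tokens(a)
    print(toks, tokens_to_action(toks))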

beklein | a month ago

The SAM family of computer-vision models has made many of the human annotation services and tools somewhat redundant, as it's now possible to achieve relatively high-quality auto-labeling of vision data.
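
For anyone curious, a rough auto-labelling sketch using the segment-anything package (the checkpoint path, model size, and filter thresholds are placeholders to adjust for your own data):

    import cv2
    from segment_anything import SamAutomaticMaskGenerator, sam_model_registry

    sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth")   # downloaded SAM weights
    mask_generator = SamAutomaticMaskGenerator(sam)

    image = cv2.cvtColor(cv2.imread("frame.jpg"), cv2.COLOR_BGR2RGB)   # HxWx3 RGB array
    masks = mask_generator.generate(image)    # list of dicts: "segmentation", "bbox", "area", ...

    # Keep confident, reasonably large masks as candidate labels for human review.
    candidates = [m for m in masks if m["predicted_iou"] > 0.9 and m["area"] > 500]
    print(f"{len(candidates)} candidate regions out of {len(masks)}")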

kookamamie | a month ago

Keep in mind that LLMs are basically just sequence-to-sequence models that can scan a million tokens and do inference affordably. The underlying advances (attention, transformers, masking, scale) that made this possible carry over to other settings. We have a recipe for learning similar models on a huge variety of other tasks and data types.
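
Concretely, the same encoder recipe drops onto any token stream, not just text. A toy sketch with illustrative sizes (the tokens could be sensor readings, amino acids, audio codes, whatever you can discretise):

    import torch
    import torch.nn as nn

    VOCAB, D_MODEL, SEQ_LEN = 1024, 128, 256

    embed = nn.Embedding(VOCAB, D_MODEL)
    encoder = nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=8, batch_first=True),
        num_layers=4,
    )
    head = nn.Linear(D_MODEL, VOCAB)          # e.g. predict the next token, whatever the data is

    tokens = torch.randint(0, VOCAB, (2, SEQ_LEN))    # a batch of 2 non-text sequences
    logits = head(encoder(embed(tokens)))
    print(logits.shape)                               # torch.Size([2, 256, 1024])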

hiddencost | a month ago

I was just about to ask a similar question. I've been working on a side project involving XGBoost and was wondering if ML is still worth learning in 2024.

My intuition says yes but what do I know.

wara23arish | a month ago

UW-Madison's ML+X community is hosting a Machine Learning Marathon that will be featured as a competition on Kaggle (https://www.kaggle.com/c/about/host)

"What is the 2024 Machine Learning Marathon (MLM24)?

This approximately 12-week summer event (exact dates TBA) is an opportunity for machine learning (ML) practitioners to learn and apply ML tools together and come up with innovative solutions to real-world datasets. There will be different challenges to select from — some suited for beginners and some suited for advanced practitioners. All participants, project advisors, and event organizers will gather on a weekly or biweekly basis to share tips with one another and present short demos/discussions (e.g., how to load and finetune a pretrained model, getting started with GitHub, how to select a model, etc.). Beyond the intrinsic rewards of skill enhancement and community building, the stakes are heightened by the prospect of a cash prize for the winning team."

More information here: https://datascience.wisc.edu/2024/03/19/crowdsourced-ml-for-...

ron0c | a month ago

+1 to this, but one might be hard pressed to find anything nowadays that doesn't involve a transformer model somehow.

anshumankmr | a month ago

I'd wager the better question is:

    What things are happening in fields of, or other than, CS that we don't hear over the din of ML/AI
antegamisou | a month ago

Seems like there is always pushback on LLMs, that they don't learn to do proofs and reasoning.

DeepMind just placed pretty high at the International Mathematical Olympiad. There it does have to present reasoning.

https://arstechnica.com/ai/2024/01/deepmind-ai-rivals-the-wo...

And it's a couple of years old now, but AlphaFold was pretty impressive.

EDIT: Sorry, I said LLM but meant AI/ML/NN generally. People say a computer can't reason, but DeepMind is doing it.

FrustratedMonky | a month ago

So, from the perspective I have within the subfield I work in, explainable AI (XAI), we're seeing a bunch of fascinating developments.

First, as you mentioned, Rudin continues to prove that the reason for using AI/ML is that we don't understand the problem well enough; otherwise we wouldn't even think to use it! So, by pushing our focus to better understand the problem, and then leveraging ML concepts and techniques (including "classical AI" and statistical learning), we're able to make something that not only outperforms some state of the art on most metrics, but is often also much less resource-intensive to create and deploy (in compute, data, energy, and human labour), with the added benefits of direct interpretability and post-hoc explanations. One example has been the continued primacy of tree ensembles on tabular datasets [0], even for larger datasets, though they truly shine on the small-to-medium datasets that actually show up in practice, which from Tigani's observations [1] would include most of those who think they have big data.
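
(To make the tabular point concrete, here's a minimal example of the kind of boosted-tree baseline that [0] finds so hard to beat; the sklearn toy dataset is just a stand-in, not the paper's benchmark:)

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import HistGradientBoostingClassifier
    from sklearn.model_selection import cross_val_score

    X, y = load_breast_cancer(return_X_y=True)
    model = HistGradientBoostingClassifier(max_depth=3)   # small, fast, inspectable trees
    print(cross_val_score(model, X, y, cv=5).mean())      # trains in seconds on a laptop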

Second, we're seeing practical examples of exactly this outside Rudin! In particular, people are using ML more for live parameter fine-tuning that would otherwise need more exhaustive searches or human labour (difficult with real-time feedback), or copious human ingenuity to resolve in a closed-form solution. Opus 1.5 is introducing some experimental work here, as are a few approaches in video and image encoding. These are domains where, as in the first case, we understand the problem, but also understand it well enough to know there are search spaces we simply don't know enough about to dramatically reduce. Approaches like this have been bubbling out of other sciences (physics, complexity theory, bioinformatics, etc.), leading to some interesting work in distillation and extraction of new models from ML, or "physically aware" operators that dramatically improve neural nets, such as Fourier Neural Operators (FNOs) [2], which embed the FFT rather than forcing it to be relearned (as has been found to often happen), giving remarkable speed-ups with PDEs such as those for fluid dynamics, and which have already shown promise in climate modelling [3] and materials science [4]. There are also many more operators, which all work completely differently, yet bring human insight back to the problem, and sometimes lead to extracting a new model for us to use without the ML! Understanding begets understanding, so the "shifting goalposts" of techniques considered "AI" is a good thing!
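
(For a flavour of [2], a rough sketch of the spectral convolution at the heart of an FNO: transform to Fourier space, keep only the lowest modes, multiply them by learned complex weights, and transform back. The 1D setting and sizes are purely illustrative:)

    import torch
    import torch.nn as nn

    class SpectralConv1d(nn.Module):
        def __init__(self, channels, modes):
            super().__init__()
            self.modes = modes
            self.weights = nn.Parameter(
                torch.randn(channels, channels, modes, dtype=torch.cfloat) / channels
            )

        def forward(self, x):                          # x: (batch, channels, grid)
            x_ft = torch.fft.rfft(x)                   # to Fourier space
            out_ft = torch.zeros_like(x_ft)
            out_ft[..., :self.modes] = torch.einsum(   # mix channels on the low modes only
                "bim,iom->bom", x_ft[..., :self.modes], self.weights
            )
            return torch.fft.irfft(out_ft, n=x.size(-1))   # back to physical space

    layer = SpectralConv1d(channels=4, modes=16)
    print(layer(torch.randn(8, 4, 128)).shape)         # torch.Size([8, 4, 128])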

Third, specifically on improvements in explainability, we've seen the Neural Tangent Kernel (NTK) [5] rapidly go from strength to strength since its introduction. While rooted in core explainability, vis-a-vis making neural nets more mathematically tractable to analysis, it has not only inspired other approaches [6] and behavioural understanding of neural nets [7, 8], but also novel ML itself [9], with ways to transfer the benefits of neural networks to far less resource-intensive techniques; [9]'s RFM kernel machine proves competitive with the best tree ensembles from [0], and even has an advantage on numerical data (plus it outperforms prior NTK-based kernel machines). An added benefit is that the approach underpinning [9] itself leads to new interpretation and explanation techniques, similar to integrated gradients [10, 11] but perhaps more reminiscent of the idea in [6].
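
(If the NTK sounds abstract, the empirical version from [5] is easy to compute for a toy network: the kernel entry for two inputs is just the inner product of the parameter gradients of the output at those inputs. A minimal sketch:)

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    net = nn.Sequential(nn.Linear(10, 64), nn.Tanh(), nn.Linear(64, 1))

    def grad_vector(x):
        """Flattened gradient of the scalar output f(x) w.r.t. all parameters."""
        net.zero_grad()
        net(x).sum().backward()
        return torch.cat([p.grad.flatten() for p in net.parameters()])

    x1, x2 = torch.randn(1, 10), torch.randn(1, 10)
    k = grad_vector(x1) @ grad_vector(x2)               # empirical NTK entry K(x1, x2)
    print(k.item())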

Finally, specific to XAI, we're seeing people actually deal with the problem that, well, people aren't really using this stuff! XAI in particular, yes, but also the myriad of interpretable models a la Rudin or the significant improvements found in hybrid approaches and reinforcement learning. Cicero [12], for example, does have an LLM component, but uses it in a radically different way compared to most people's current conception of LLMs (though, again, ironically closer to the "classic" LLMs for semantic markup), much like the AlphaGo series altered the way the deep learning component was utilised by embedding and hybridising it [13] (its successors obviating even the traditional supervised approach through self-play [14], and beyond Go). This is all without even mentioning the neurosymbolic and other approaches to embed "classical AI" in deep learning (such as RETRO [15]). Despite these successes, adoption of these approaches is still very far behind, especially compared to the zeitgeist of ChatGPT style LLMs (and general hype around transformers), and arguably much worse for XAI due to the barrier between adoption and deeper usage [16].

This is still early days, however, and again to hearken back to Rudin, we don't understand the problem anywhere near well enough, and that extends to XAI and ML as problem domains themselves. Things we can actually understand seem a far better approach to me, but without getting too Monkey's Paw about it, I'd posit that we should really consider whether some GPT-N or whatever is actually what we want, even if it did achieve what we thought we wanted. Constructing ML with useful and efficient inductive bias is a much harder challenge than we ever anticipated, hence the eternal "20 years away" problem, so I just think it would perhaps be a better use of our time to make stuff like this, where we know what is actually going on, instead of only knowing theoretically. It'll have a part, no doubt; Cicero showed that there's clear potential, but people seem to be realising "... is all you need" and "scaling laws" were just a myth (or worse, marketing). Plus, all those delays to the 20 years weren't for nothing, and there's a lot of really capable, understandable techniques just waiting to be used, with more being developed and refined every year. After all, look at the other comments! So many different areas, particularly within deep learning (such as NeRFs or NAS [17]), which really show we have so much left to learn. Exciting!

  [0]: Léo Grinsztajn et al. "Why do tree-based models still outperform deep learning on tabular data?" https://arxiv.org/abs/2207.08815
  [1]: Jordan Tigani "Big Data is Dead" https://motherduck.com/blog/big-data-is-dead/
  [2]: Zongyi Li et al. "Fourier Neural Operator for Parametric Partial Differential Equations" https://arxiv.org/abs/2010.08895
  [3]: Jaideep Pathak et al. "FourCastNet: A Global Data-driven High-resolution Weather Model using Adaptive Fourier Neural Operators" https://arxiv.org/abs/2202.11214
  [4]: Huaiqian You et al. "Learning Deep Implicit Fourier Neural Operators with Applications to Heterogeneous Material Modeling" https://arxiv.org/abs/2203.08205
  [5]: Arthur Jacot et al. "Neural Tangent Kernel: Convergence and Generalization in Neural Networks" https://arxiv.org/abs/1806.07572
  [6]: Pedro Domingos "Every Model Learned by Gradient Descent Is Approximately a Kernel Machine" https://arxiv.org/abs/2012.00152
  [7]: Alexander Atanasov et al. "Neural Networks as Kernel Learners: The Silent Alignment Effect" https://arxiv.org/abs/2111.00034
  [8]: Yilan Chen et al. "On the Equivalence between Neural Network and Support Vector Machine" https://arxiv.org/abs/2111.06063
  [9]: Adityanarayanan Radhakrishnan et al. "Mechanism of feature learning in deep fully connected networks and kernel machines that recursively learn features" https://arxiv.org/abs/2212.13881
  [10]: Mukund Sundararajan et al. "Axiomatic Attribution for Deep Networks" https://arxiv.org/abs/1703.01365
  [11]: Pramod Mudrakarta et al. "Did the Model Understand the Question?" https://arxiv.org/abs/1805.05492
  [12]: Meta FAIR Diplomacy Team et al. "Human-level play in the game of Diplomacy by combining language models with strategic reasoning" https://www.science.org/doi/10.1126/science.ade9097
  [13]: David Silver et al. "Mastering the game of Go with deep neural networks and tree search" https://www.nature.com/articles/nature16961
  [14]: David Silver et al. "Mastering the game of Go without human knowledge" https://www.nature.com/articles/nature24270
  [15]: Sebastian Borgeaud et al. "Improving language models by retrieving from trillions of tokens" https://arxiv.org/abs/2112.04426
  [16]: Umang Bhatt et al. "Explainable Machine Learning in Deployment" https://dl.acm.org/doi/10.1145/3351095.3375624
  [17]: M. F. Kasim et al. "Building high accuracy emulators for scientific simulations with deep neural architecture search" https://arxiv.org/abs/2001.08055
babel_ | a month ago

AlphaFold seems like a major medical breakthrough.

dartos | a month ago

There was a lot of research into game-playing prior to LLMs (e.g. real-time strategy). Is there nothing left to conquer there now? Or is it still happening but no one reports on it?

TheDudeMan | a month ago

This is a nice daily newsletter with AI news: https://tldr.tech/ai

svdr | a month ago

A novel SNN framework I'm working on. The newest post has been taking me a while: metalmind.substack.com

chronosift | a month ago

Is there anything cool going on in animation? Seems like an industry that relies on a lot of rote, repetitive work and is a prime candidate for using AI to interpolate movement.

publius_0xf3 | a month ago

To plug my own field a bit: in materials science and chemistry there is a lot of excitement about using machine learning to get better simulations of atomic behavior. This could open up exciting areas in drug and alloy design, and maybe find new CO2-capturing materials or better cladding for fusion reactors, to name just a few.

The idea is that to solve these problems you need to solve the Schrödinger equation (1). But the Schrödinger equation scales really badly with the number of electrons and can't be computed directly for more than a few sample cases. Even Density Functional Theory (DFT), the most popular approximation that is still reasonably accurate, scales as N^3 with the number of electrons, with a pretty big prefactor. A reasonable rule of thumb is 12 hours on 12 nodes (each node being 160 CPU cores) for 256 atoms. You can play with settings and increase your budget to maybe get to 2000 atoms (and only for a few timesteps), but good luck beyond that.

Machine learning seems to be really useful here. In my own work on aluminium alloys I was able to get the same simulations that would have needed hours on the supercomputer to run in seconds on a laptop, or to do simulations with tens of thousands of atoms for long periods of time on the supercomputer. The most famous application is probably AlphaFold from DeepMind.

There are a lot of interesting questions people are still working on:

What are the best input features? We don't have any nice equivalent to CNNs that is universally applicable, though some have tried 3D convnets. One of the best methods right now involves taking spherical-harmonic-based approximations of the local environment in some complex way I've never fully understood, but it is closer to the underlying physics.
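
To give a feel for what these descriptors do, here's a toy radial descriptor in the spirit of Behler-Parrinello symmetry functions (much simpler than the spherical-harmonic approaches I mentioned): each atom gets sums of Gaussians over its neighbour distances, which is automatically invariant to translation, rotation, and atom reordering.

    import numpy as np

    def radial_descriptor(positions, centres=np.linspace(1.0, 5.0, 8), eta=4.0, cutoff=5.0):
        """Per-atom features: Gaussians of neighbour distances, summed over neighbours."""
        feats = np.zeros((len(positions), len(centres)))
        for i in range(len(positions)):
            d = np.linalg.norm(positions - positions[i], axis=1)
            d = d[(d > 1e-8) & (d < cutoff)]                    # neighbours within the cutoff
            feats[i] = np.exp(-eta * (d[:, None] - centres) ** 2).sum(axis=0)
        return feats

    atoms = np.random.rand(20, 3) * 6.0                         # toy coordinates
    print(radial_descriptor(atoms).shape)                       # (20, 8)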

Can we put physics into these models? Almost all of these models fail in dumb ways sometimes. For example, if I begin to squish two atoms together, they should eventually repel each other, and that repulsion force should scale really fast (ok, maybe they fuse into a black hole or something, but we're not dealing with that kind of esoteric physics here). But all machine-learned potentials will by default fail to learn this and will only learn the repulsion down to the closest distance between any two atoms in their training set. Beyond that they guess wildly. Some people are able to put this physics into the model directly, but I don't think we have it totally solved yet.
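
One simple way people bake the physics back in is to add a fixed short-range repulsive baseline underneath the ML term, so the total energy still blows up correctly when two atoms are squished together, even where the model has no training data. A sketch with made-up functional forms and units (the "ML" term is just a dummy stand-in for a trained model's pair energy):

    import numpy as np

    def ml_pair_energy(r):
        return -0.5 * np.exp(-((r - 2.5) ** 2))     # placeholder for a learned attractive term

    def repulsive_baseline(r, a=1000.0, rho=0.3):
        return a * np.exp(-r / rho)                 # Born-Mayer-style repulsion, fixed by hand

    def total_energy(r):
        return ml_pair_energy(r) + repulsive_baseline(r)

    for r in [0.5, 1.5, 2.5, 4.0]:
        print(f"r = {r:.1f}  ->  E = {total_energy(r):8.3f}   (illustrative units)")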

How do we know which atomic environments to simulate? These models can really only interpolate; they can't extrapolate. And while I can get an intuition for interpolation in low dimensions, once your training set consists of many features over many atoms in 3D space, this becomes a high-dimensional problem. In my own experience, I can get really good energies for the shearing behavior of strengthening precipitates in aluminium without directly putting those structures in. But was this extrapolated or interpolated from the other structures? Not always clear.

(1) Sometimes also the relativistic Dirac equation, e.g. fast-moving electrons in some of the heavier elements move at relativistic speeds.

dmarchand90 | a month ago

I'm just a touch disappointed that this thread is still dominated by neural-network methods, often applying architectures similar to LLMs to other domains, such as vision transformers.

I'd like to see something about other ML methods such as SVM, XGBoost, etc.

PaulHoule | a month ago

FeatUp

king_magic | a month ago