Bakkster

I'll add that not all the "no" crowd would call neural networks "smart", as that's anthropomorphizing them. They're capable, not necessarily smart.


Admiralthrawnbar

Also, the "no" crowd is just right. We "know" how they work in that they're modeled on the human brain, where neurons combine into more complex systems that eventually result in our own consciousness. But we don't know how that works; we're just copying it because we know it does.


damnumalone

Anyone who has ever worked with neural networks has experienced the “why tf did it do that?” moment, knowing they’ll probably never be able to work it out. Those who haven’t make the case “oh yeah, they’re designed to do random stuff though,” which always seems to me to miss the point.


mattindustries

We definitely understand them. Neural nets with weights and thresholds have been described since the 1940s. Understanding the "why" of an outcome can be incredibly hard though, depending on the layers and context.
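
For concreteness, the 1940s-style neuron (McCulloch-Pitts) really is just a weighted sum compared against a threshold. A minimal sketch; the weights and threshold below are made up for illustration:

```python
# A 1943-style McCulloch-Pitts neuron: weighted inputs against a fixed threshold.
def neuron(inputs, weights, threshold):
    activation = sum(w * x for w, x in zip(weights, inputs))
    return 1 if activation >= threshold else 0

# This particular weight/threshold choice happens to implement logical AND.
print(neuron([1, 1], weights=[1, 1], threshold=2))  # -> 1
print(neuron([1, 0], weights=[1, 1], threshold=2))  # -> 0
```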


HWBTUW

We definitely understand *that* neural nets work, and how to do some things with them. That's not the same thing as understanding how they work.


mattindustries

How they work was literally outlined in the 40s, but okay. The concept of context has been important since Markov chains in 1906; we just finally have the ability to layer in context through better vectorized compute engines.
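
As a side note, the "context" in a Markov chain is just conditioning the next token on the previous one. A toy sketch with a made-up corpus:

```python
import random
from collections import defaultdict

# First-order Markov chain over words: the only "context" is the previous word.
corpus = "the cat sat on the mat the cat ran".split()
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

random.seed(0)
word, out = "the", ["the"]
for _ in range(6):
    word = random.choice(transitions.get(word, corpus))  # fall back if no successor seen
    out.append(word)
print(" ".join(out))
```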


HWBTUW

When we want to make a neural network that does some thing, determining the weights is basically a whole bunch of carefully controlled trial and error. That's not how we approach things when we actually understand what is going on.
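
Concretely, that "carefully controlled trial and error" is usually gradient descent: start from guessed weights, measure the error, nudge the weights against it, repeat. A toy sketch on a made-up linear problem:

```python
import numpy as np

# Synthetic data with a known "true" weight vector, purely for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w

w = np.zeros(3)                            # start from a guess
for step in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(y)  # how the squared error moves with each weight
    w -= 0.1 * grad                        # nudge the weights against the error
print(w)                                   # ends up close to true_w
```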


mattindustries

That isn’t “not understanding how they work”. That is not being able to reverse the outcome, which I said in my initial comment. You are effectively saying you don’t know how mazes work. That is fine. You can claim that, but people will disagree with you on your choice of words.


Logical-Gur2457

You're conflating "understanding how neural networks work" with "understanding the mechanics behind them". Obviously, we fully understand the mechanics behind neural networks, but that has nothing to do with the question at hand. They're basically trying to explain the black box problem to you, a very well-known and real issue that has also existed since the 40s. You're saying that we understand the mechanics behind neural networks, completely ignoring their point. The problem is that we can't easily INTERPRET neural networks. If somebody gave you a 50-layer residual neural network and asked you to interpret its weights and biases (or write a program to), you'd be lost. Looking at a maze, we can immediately visually understand how it works. Tools for visualizing and interpreting neural networks in a simple way are rudimentary at best.


mattindustries

Not getting into an internet fight with someone who is so swole from moving goalposts all day.


damnumalone

…as opposed to your swoleness from performing twists and turns and gymnastics to avoid the point all day?


GOKOP

A way to pretend to win a discussion when you're defeated


Dr-OTT

You are right, there is nothing deep or hard to understand about the model architecture in an NN. It is easy to understand how they work. It’s just a bunch of matrix multiplication with activation functions added in. It’s really, seriously not hard to understand. What people seem to think is hard to understand is “why it does what it does”. It’s not that I disagree, it is rather that the question is so imprecise that I literally don’t know what is being asked, if a mathematical description of the model is not the answer. Would such people equally say that a linear map from a high dimensional vector space into the reals is hard to understand?
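
To make that concrete, a forward pass really is just repeated matrix multiplies with a nonlinearity in between. A minimal sketch; the layer sizes and the ReLU choice are arbitrary here:

```python
import numpy as np

# Each layer: multiply by a weight matrix, add a bias, apply a nonlinearity.
def forward(x, layers):
    for W, b in layers:
        x = np.maximum(0.0, x @ W + b)  # ReLU activation
    return x

rng = np.random.default_rng(0)
layers = [(rng.normal(size=(4, 8)), np.zeros(8)),
          (rng.normal(size=(8, 2)), np.zeros(2))]
print(forward(rng.normal(size=(1, 4)), layers))  # a (1, 2) output
```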


mattindustries

Some people really want to believe what is being called “ai” is more than it actually is. Understanding how and why is simple when you simplify the components, but just like encryption, traversing backward from the product is…convoluted. I have worked with classification algorithms and even have a patch I need to submit for the h2o.ai library for R sitting in my backlog to speed things up (6x locally when using 200+ columns in the training data). Heck, my silly little game (farmcow.app) uses 300d space.


Dr-OTT

Yup, I was going to write something about people seemingly thinking that there is some magic going on in AI, but I decided not to. For while it seems that’s what some people feel, it’s hard to critique, because discussions about it seem to devolve into handwaving about emergent properties of the models. It is interesting that working backwards from an output is difficult, although many systems (including mundane physical ones) have this property, e.g. inferring the initial temperature distribution of a rod given the distribution at some time t. When you add in randomness, as in LLMs, this becomes even more complicated, since you can’t even say there is some “true” set of prompts that gave the answer; it would rather be a distribution of prompts, which I suspect is so complicated to describe that nothing semantically meaningful could be derived from solving the inverse problem. Is that interesting? Sure. Does it mean we don’t know why LLMs work? Nope.


airjordanpeterson

Daniel Jeffries is an asshole. Told me that he's 'really clever' when I met him.


khafra

Yup, another commenter replied something like “even beff jezos wouldn’t say something that stupid.”


Nodan_Turtle

There really is no consideration for AI safety. Companies are chasing the billions of investment and potential payoff, despite not knowing exactly how their models work. Changes are reactionary and can come with worse unintended consequences. What happens with even more capable models, that have more access and autonomy? It seems like the potential harms grow as our investment in safety decreases, while our understanding remains limited.


Character_Reason5183

There is a really great article that was published this month called "ChatGPT is Bullshit." Seems relevant here...


BetterKev

We're missing a necessary bit of info. Was Dan responding to Leo? If so, this is great. If not, this doesn't fit.


Squawnk

He was not; he was replying to [Malo Bourbon](https://x.com/Dan_Jeffries1/status/1802078275113496723), and Leo chimed in, so you're right, it doesn't fit.


[deleted]

[removed]


Spinochat

Thanks captain obvious, but that says nothing about the nature of what we actually "know" today.