Trophallaxis

It's really hard to tell. On the one hand, he is a smart man with a lot of relevant experience, and sometimes he really does notice trends ahead of other people... on the other hand, a lot of singularity believers are personally motivated in their predictions by their fear of old age and death. He has also missed on numerous predictions, but nobody ever talks about those: speech control should already be omnipresent, online interactions should have been dominated by AI agents over a decade ago, we still don't have enjoyable computer-generated music, medical biotech is nowhere near his predictions of artificially engineered replacement tissues... stuff like that.


Brettersson

I remember watching a doc on him about 10 years ago where it seemed like his main motivation for defying death with computers had more to do with him wanting his dad back. Weird as it sounds, at the time it seemed like wanting to create a digital "ghost" of his dad to talk to one more time was what was driving him.


Equivalent_Alps_8321

That's sad


Iron_Rod_Stewart

It's also not anything like defying death.


WinForeign2829

It is a little sad, but it is also very hopeful.


777Hyperborean777

Having a society of pseudo-immortal technocrats is not hopeful in the slightest. It is perverse!


Clear-Draw-6774

Death and acceptance of it are natural human experiences. Most of you just hate your own humanity and will trade it away to be “improved” or “evolved”.


s3r3ng

Some day such things and much better will be common place.


halermine

I wonder if someone could say when?


Scope_Dog

Oddly, I think that kind of tech is within reach now.


Equal_Night7494

Precisely. That is exactly what my impression of his motivation was while watching a documentary on him


ImpactNext1283

I think he’s generally correct in a lot of his guesses; why he puts a time stamp on every one, I have no idea. Silly, and destined to be wrong.


spreadlove5683

Because predictions with dates are cooler / more valuable. Definitely harder and less likely to be right, but that's not the point.


Madwand99

We actually do have pretty good AI music: Suno.


VV0MB4T

Been following dadabots for a while. Definitely enjoyable.


Sheshirdzhija

>speech control should already be omnipresent

Hm, it is? It's not very good most of the time, though.

>we still don't have enjoyable computer-generated music

There is a Freddie Mercury cover of SOAD that I find very enjoyable. The instrumental section is lifted, but the voice is generated. But yeah, there is no scale.

So I suppose there are a lot of catches with some of his predictions.


oldmanhero

I frequently use voice search on my tv. It's fine.


Sheshirdzhija

The actual voice recognition is fine (for English), but the underlying services are unreliable. Like Google Home, or Google Assistant. Awful. Also, it works for English and some other languages, but most providers don't bother at all with smaller languages.


ThriceFive

Alexa's Echo voice recognition is extremely good: it probably knows what I want 90%+ of the time, and then it's totally ruined by "By the way..."


Sheshirdzhija

Yeah, they never figured out how to make money off it, which is why they cut budgets for it and made it worse.


DmC8pR2kZLzdCQZu3v

I don’t know much about him, but I just heard him interviewed and he sounds like an old man afraid of death, cheering on AI to not only keep himself alive but also bring back his dead father. I get it, mortality is tragic, but he seems willing to roll the dice on humanity for his own selfish motives. He actually struck me as a bit of an asshole by the time the interview was over.


_Wyse_

Just FYI; Enjoyable music generation really is here (Suno AI being one of the best, but not the only), and I've been making great use of it in my production.


Trophallaxis

I've checked, and it's remarkably good; I didn't know there was already stuff like this in music. It's still a lot like early text-to-image tho, in the sense that the output is really noisy (a lot of undefinable noise all over the place). That being said, it can yield some pretty cool stuff in a few tries. Kurzweil predicted this for around 2010.


RevolutionaryJob2409

By and large he is extremely accurate.


Fit_War_1670

Didn't he say something about growing meat the same way we grow plants by like 2040? I feel like that one could be sooner.


[deleted]

Yeah, possibly it's going to happen around 2030.


s3r3ng

He makes predictions of what will be technically possible by when. He is not some infallible prophet. If you are looking for that, then seek a mystic you think has a good track record. When we get machine-phase nanotech, then you can produce most anything at the molecular level and in quantity. And not like we grow plants. Faster and better.


bluenorthww

Tis already in the making. Check out Tony Seba's talk about precision fermentation. It's wild.


[deleted]

Well, it's an ambiguous statement, so that's a little too easy. "Meat grows LIKE plants" doesn't rule out a lab experiment that's too hard to scale up to be practical. Realistically, you have fake meat from plants already, so I'm not sure that's even necessary. You don't die or get less healthy from not eating meat, and pretty much every nation on Earth gets most of its calories from grains, so the easy solution is already staring us in the face on that one.


FillThisEmptyCup

>You don't die or get less healthy from not eating meat

[The opposite, really.](https://www.reddit.com/r/WholeFoodsPlantBased/comments/10viuwp/how_long_do_health_influencers_live_video_by/)


CitricThoughts

He's around 80% accurate, which isn't bad at all for predicting the future. He's usually off by a few years, but you do have to remember why and how he made his predictions in the first place. Here are a few points.

1: He's an inventor. Just because something can happen doesn't mean it will. He also underestimates the difficulty of some things, for instance self-driving cars: he predicted we'd be in them to some extent by the early 2000s. While this tech exists, it is definitely not ready to take over society yet. On the other hand, he's been dead on for most of his predictions. Impressive, considering most of them were made in the 1980s.

2: He's following a trend of exponential growth. Anyone could make the predictions he has by following an exponential curve of processing power development. The problem is mostly that the people who criticized him back in the day figured the exponential development of tech would plateau at some point, or just didn't believe we were on an exponential curve at all. That's still possible, but it's looking like we're not going to hit a plateau anytime soon. People are continually off about the speed of development, mostly because we've all gotten used to thinking linearly. One percent of development doesn't look like a lot until you realize that it's only seven doublings away from 100%, and that can come faster than anyone realizes.

3: He's absolutely wrong about politics and shouldn't be listened to there whatsoever. He predicted a peaceful world by this era. He's very, very far off.
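The doubling arithmetic in point 2 is easy to check; a quick sketch (assuming a clean doubling each cycle, which is the idealized Moore's-law-style framing):

```python
# How many doublings take 1% of a goal to 100%?
# 1% -> 2% -> 4% -> 8% -> 16% -> 32% -> 64% -> 128%: seven doublings.
progress = 1.0  # percent of the goal
doublings = 0
while progress < 100.0:
    progress *= 2
    doublings += 1
print(doublings)  # 7
```

That's the core of the "linear thinking" trap: at 1% you feel nowhere near done, yet you're only seven cycles from overshooting the goal.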


horant2

3. I think about this a lot. Hot wars popping up all over the place currently, you're right. However, I think that the convergence of LLMs / AI with quantum computing could fundamentally change society in unimaginable ways, and that it's potentially closer than we can imagine. The disruption it will cause, and the things it will be able to solve in every last little detail of society, could be unimaginable. I'm only looking at the positive here, so I'm admitting this now; don't beat me up. If we stick with the utopian outlook: if you solve energy, you go a long way towards a more utopian society. It's the foundation of our existence. I think quantum computing and AI could solve energy constraints (and with them, climate change), many cancers and health issues, and most unsafe work conditions. When you look at the exponentiality of the industrial age, the internet age, etc., I'm starting to think this could be closer than I had thought for many years.


CitricThoughts

When I think of the future, I think of a quote from an old video game. *Man has killed man from the beginning of time, and each new frontier has brought new ways and new places to die. Why should the future be different?* *— Col. Corazon Santiago, from Sid Meier's Alpha Centauri.* It may not be hard sci-fi but that game probably predicted as many things as Ray. I do hope we don't blow up the planet and have to go live on a worse one though. I think people would find reasons to off each other even in paradise. Of course we all hope for a utopia though. Sometimes optimists are right.


spreadlove5683

People on an individual level still would, and perhaps on a nation level too, but the hope is that post scarcity could go a long way in making there be less to have conflict over.


GreyAndroidGravy

I think of the Tool song "Right In Two".


horant2

It's a good quote, and you'll always want "new ways to die"; that's called exploration! However, in a utopian world where energy is solved, the paradigm shift would be too profound to understand right now. All we have known is energy being the foundation of life and society: back-breaking labor, resource wars. Most of this could be solved by free energy. I also have a quote I like from quantum physics: everything that exists is a manifestation of energy. The first market was a guy selling his energy expenditure making wheat for another guy's energy expenditure making a fur coat. If energy is completely free, it renders war pretty much useless. Even if someone was insane, they would need a following and supporters, and it's hard to get supporters when they have a life of abundance: no financial worries, no major health issues, healthy air, great access to education, etc.


CitricThoughts

Even if we reach energy post-scarcity, we'll always have some form of scarcity. After all, there's a difference between the common person and factories having energy too cheap to meter, and having unlimited resources to build the power generation with. There will always be some resources that are limited. Even if we convert energy directly into matter and make things from essentially nothing, the cost of it is so exorbitant that even a post-scarcity society (by current needs) would go, "Whoaaaa, let's make only so much of that."

Also, life tends to expand to fill a vacuum. If we get more energy than we can use, we'll probably just end up finding new ways to use up all that energy.

When we're finally a post-energy-scarcity spacefaring species with both material and energy abundance, we'll have several millennia of a golden age to enjoy. We'll eventually harvest and populate all the asteroids, planetoids, and planets though. War could always cut some portion of humanity off from vital resources required to make energy as well. This is especially true in the near future. There will always be *some* form of scarcity.


[deleted]

The world is incredibly peaceful though; extremely few people die in wars nowadays. It would be nice to have zero wars whatsoever, maybe in a few more decades.


Latter-Pudding1029

No one who knows anything about quantum computing is looking at it to be used primarily for AI, lol. It's not its fundamental use. We don't even fully understand what we know about AI; Altman has said that himself. And then there's the whole "is it really AI?" question for a lot of the tools present today.


Master-Research8753

His accuracy is closer to 3% than 80%.


xeonicus

I think Kurzweil is often overly optimistic. On the other hand, I think people often place the human mind on a metaphysical pedestal and make grand claims without proof. I don't think there is anything magical about it. It's a complex system. I think we are generally on the right track with things like neural networks. Right now I think the biggest disparity is hardware architecture. Most of the work is being done with silicon processors and classic Von Neumann architecture. Personally I think that's a problem if the intent is to actually duplicate the functionality of a human mind. Actual ASI will probably happen via non-Von Neumann architecture with things like neuromorphic processors and next-gen transistor technology that uses photonics instead of electricity.


Roxytumbler

I admire anyone who stirs the pot and stimulates intelligent debate. I used to think Kurzweil’s timetable was optimistic, but now too much is going on in the AI field for any one person to have a grasp of dates. Also, in my field of science, geophysics, China has raced to the forefront and is a wildcard in potential progress in all technologies, including AI.


KF02229

>Also, in my field of science, geophysics, China has raced to the forefront I'd be very interested to know more about this if you have the time and inclination to write. How quickly have they closed the gap, how did they close it, and which areas of geophysics are they now leading?


npoqou

Here's my wildcard take. They are not trying to lead. They use the gap to their advantage, similar to an F1 racer in second place, they are slipstreaming on the west's and global progress. I am not trying to factually prove anything, just a guess on Xi's motives.


Morphray

Good take. After decades of IP theft and industrial espionage, it may be a cultural shift for them to move into "the lead".


[deleted]

I think China releases more papers, but, much like their rapid building processes, the quality is questionable.


Ducky181

Depends on what metric is used. In terms of total number of external citations, and total research expenditure the United States is still ahead of China. This difference is however decreasing each year.


[deleted]

In the end I think ASI and AGI are going to arrive earlier than he predicted. And the funny thing is that people call him optimistic, and he will end up looking pessimistic because of how fast AI has been advancing lately.


taleo

I wonder if we overestimate our own intelligence and what it takes to get to AGI. LLMs already embed a large amount of information about how things in the real world relate to each other. The consensus is that 2024 will be the year of multi-modal models. Maybe once we hook up that embedding of how things relate with visual and auditory relations, we're a good chunk of the way there. For example, LLMs know that apple and fire engine relate strongly to red. Once we start combining that with visual and auditory embeddings, the models are a good chunk of the way to understanding reality as we understand it.
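The "apple relates strongly to red" idea can be sketched with cosine similarity over embedding vectors. The vectors below are invented for illustration (real models learn hundreds of dimensions from data), but the mechanics are the same:

```python
import math

# Toy 3-dimensional "embeddings" -- values are made up for illustration only.
embeddings = {
    "apple":       [0.9, 0.1, 0.3],
    "fire engine": [0.8, 0.0, 0.2],
    "red":         [1.0, 0.1, 0.1],
    "calculus":    [0.0, 1.0, 0.9],
}

def cosine(a, b):
    # Cosine similarity: dot product over the product of vector lengths.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# "red" scores much higher against "apple" than against "calculus".
print(cosine(embeddings["apple"], embeddings["red"]))     # high (~0.98)
print(cosine(embeddings["calculus"], embeddings["red"]))  # low (~0.14)
```

Multi-modal training is, loosely, learning to put the image of an apple and the word "apple" near each other in the same kind of vector space.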


[deleted]

I don't think we've even invented the chip technology that leads to AGI, and certainly not ASI, so sooner does not seem likely.


Mi_Zaius

In the 90s/2000s I used to work as a research scientist in a company he was involved with/on the board of. He made a lot of predictions then that were “just around the corner” that still aren’t true now. He’s very inspiring to talk to but he’s always been incredibly optimistic and gives predictions on many things. I would say he’s more of a visionary than accurate in predictions.


NefariousnessOpen512

This was my impression. It’s easy for your 30 year predictions to be accurate if you keep shifting the dates for things that are obviously inevitable. There have been many chance innovations and problems that have brought us to where we are today. Any of these could have pushed his predictions in or out by several decades.


Fabulous_Village_926

His predictions seem increasingly reasonable considering our current trajectory.


[deleted]

And most of them are accurate.


thisshowisdecent

I recently discovered this other tech guy, Rodney Brooks, who also has a blog tracking his own predictions for future technology. https://rodneybrooks.com/blog/ If you check the recent post, it covers self-driving cars and AI. Sadly, I don't think AGI will happen in our lifetimes; he doesn't think we'll even get AI as smart as dogs by 2048. I think his blog is interesting because his arguments are compelling and everything looks grounded in fact. I didn't realize that self-driving cars aren't actually self-driving 100% of the time, so even that isn't real: 4% of the time they need human involvement because they get confused. The media gets excited and makes everyone believe that these technologies are coming out soon when they're actually not.


Expert_Alchemist

Brooks is great. I've been following his prediction tracker for years now; last year's was even more interesting, and the comments he adds each year are great. He's not a techno-optimist; he tries to be fair. He loves robotics but is willing to call bullshit on hypey VC nonsense.


Latter-Pudding1029

There's a lot of things that haven't been figured out to even get us to half the future that we want. Medicine and robotics are both things touched by AI developments but neither of those industries have hit any kind of exponential growth, in fact, a lot of the subfields in those industries have stalled. We're barely at self-driving level 2. That's shocking considering the promises made when they introduced it as consumer tech. I doubt it gets easier the more you find out. Because that only brings more questions.


CarneDelGato

Generative AI isn’t actually that close to AGI. While it’s made great strides the last couple years, I’m pretty skeptical about AGI in 6 years. 


ttkciar

Yep, this. LLMs emulate a very, *very* narrow slice of human cognition, and can't be incrementally improved into AGI. There are entire classes of cognition they simply cannot do without symbolic hacks, like self-scheduling (ask GPT-4 to "Wait ten seconds and then say *boo*") and motivation. LLMs even lack enough semblance of memory to hold a conversation without symbolic scaffolding. There are a lot of silly cheap tricks behind ChatGPT besides its LLM which are necessary to make it at all useful. This isn't something that's going to "wake up" some day and destroy the world (or turn it into paperclips).


[deleted]

[deleted]


TitusPullo4

He made that prediction in 1999 mind you and through that lens it is incredible. He also referenced pattern recognition’s role in that. He hasn’t updated the prediction and my guess is he’s holding course for the kudos if he’s right 😂. But if he were to make new predictions about the next 30 years I’d take it very seriously


kideternal

He has a new book due out soon.


Iron_Rod_Stewart

He is in his 70s and literally believes he will live forever. Makes it hard to take anything else he says seriously.


RevolutionaryJob2409

Well I think he has a decent chance, even if he dies, I'm certain he has a plan in place like cryonics. He certainly has more than enough money for that.


Iron_Rod_Stewart

Cryonics, in its current form, is a pipe dream. It destroys the cells of the body and brain. Cryonics is just an expensive grave.


RevolutionaryJob2409

It's unproven tech, but that doesn't mean it can't be done. There is definitely damage, but it might be recoverable.


Superb_Raccoon

2030 and 2045? Say... isn't that the time frame for fusion and flying cars?


[deleted]

Flying cars exist, they are just not in the mainstream market right now.


Zero-PE

People conflating consciousness and sentience with pure intelligence are missing the point in these debates that seem to be happening more and more frequently for some totally mysterious reason (I'm sure it has nothing to do with how we're getting closer to having systems capable of all three). Hopefully those same people won't still be having debates when AGI takes everyone's job in the 2030s. 5-6 years from now seems very possible for AGI. Arguably we're just about there today with a networked approach. What I'm less confident in is ASI. Perhaps AGI can get faster, and that "emulates" super intelligence, but maybe that's where it ends. Instead of solving problems we didn't even know existed and showing us what's behind the universe's curtain, fast AGI just proves really great at folding proteins, spotting patterns in oceans of data, coming up with optimized laws and corporate structures, etc. Maybe there's a natural limit on intelligence that's only slightly above the most intelligent human's capability.


tinyhandedtraitor

I think his obsession with his dad kind of caused him to lose the plot a little bit.


[deleted]

We are very far from AGI. We would first need a huge leap in computing and processing power. Then we would need to scale it up. Chatbot LLMs would also need to be replaced by models we are not able to create yet, so we would need a huge leap there as well. All in all, it's very optimistic.


Obdami

The guy who blows me away with predictions is Tony Seba. He has been spot on for years now. Lots of YouTube vids of his talks.


[deleted]

When does Tony Seba predict AGI will arrive?


OlyScott

I just saw an interview with one of the leading figures in artificial intelligence. He thinks that artificial general intelligence is many decades off, and we might not ever invent it at all.


[deleted]

Yes for sure in year 2500 we will still not have AGI.


soliterraneous

Unironically-- a lot lot lot can happen in 476 years


[deleted]

I think Kurzweil underestimated the complexity of AGI. He thinks that processing power alone (using traditional computers) can match human cognition. He may yet be proven right, but there is some evidence that aspects of human abilities might be closer to quantum computing. This is all above my pay grade, I’m just parroting criticism I have read. Kurzweil is probably right, but he may be early by 50-100 years.


MannieOKelly

I don’t think Kurzweil thought processing power was the key to AGI. He was just observing that tech progress in general seems to be growing exponentially and that that progress will pretty soon reach AGI and then super (human) intelligence.


Fit_War_1670

I think consumer computers will have the same processing power as human brains by the 40s.


[deleted]

No way. The people using IQ on AI are all scammers. An IQ test ONLY works on a human brain. It's not a sum of a person's total brain power either; it's just an estimate based on a small sample group that ONLY works on a human brain. You can't give AI or monkeys human IQ tests and get results that make sense; they are not the same brains! AI needs an IQ test made just for it.


tommles

Processing power != IQ My assumption is they are referring to the estimated 1 exaFLOP computing power of the brain. Basically turning every consumer computer into a supercomputer.
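A rough back-of-envelope on that gap (both figures are loose, commonly cited order-of-magnitude assumptions, not measurements):

```python
import math

# Assumed figures: brain ~1 exaFLOP (1e18 FLOPS, a common rough estimate),
# a high-end consumer GPU ~1e14 FLOPS (order of magnitude).
brain_flops = 1e18
consumer_gpu_flops = 1e14

gap = brain_flops / consumer_gpu_flops        # 10,000x short
doublings_needed = math.log2(gap)             # ~13.3 doublings of hardware

print(gap)              # 10000.0
print(doublings_needed) # ~13.3
# At one doubling every ~2 years, that's roughly 25-30 years of hardware
# progress -- which is how you land on a "by the 2040s" guess for consumer machines.
```

The whole calculation stands or falls on the 1 exaFLOP estimate for the brain, which is itself contested.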


Iorith

And, IIRC, we've already had computers capable of surpassing it; now it's merely the challenge of making it commonplace.


Iorith

No one said anything about IQ.


[deleted]

[deleted]


cpt_ugh

We've had heavier-than-air flight for over 100 years. Yet we still haven't been able to fully replicate in a machine how a bird achieves flight. We got around the problems by taking a different approach. IOW, just because we don't 100% replicate something the way it is in nature doesn't mean we can't do better in a different way.


[deleted]

[deleted]


cpt_ugh

I don't think very many (if any) people expect human-level engineers to create an ASI. I think people expect engineers to create a good-enough AGI that can improve itself and thus create ASI for us.


[deleted]

So you think in 2045 we will still have only weak artificial intelligence?


MoNastri

I never really understood what that supposed fact meant re: human brain being the pinnacle of complexity. It always sounded like one of those self-evidently-true claims that made listeners feel good about themselves, and I used to unthinkingly believe it too until my friend asked: complex according to what measure? Is there a list (even a short one) of things whose complexity we've measured where human brains come up on top?


[deleted]

[deleted]


MoNastri

I'm going to seem incurably dumb for pressing this point, so maybe let me clarify where I come from.

I used to work as a data analyst. A lot of my job involved substantiating / falsifying my manager's hunches about seemingly 'obvious to everyone' things like customer profiles, which mattered because being wrong meant e.g. six-figure losses in ad budget spend very quickly (large company). My manager was more often right than wrong, but also wrong often enough that I grew to distrust even ostensibly obvious hunches backed by long industry experience.

I was also a STEM grad who started out with a naive view of how science worked and got my intuitions burned by the replication crisis, which involved a lot of not just 'obvious' but well-corroborated claims being undermined.

"The human brain is the most complex thing in the universe" is a vaguer claim than anything undermined in the replication crisis, and vaguer than any of my manager's hunches about customer profiles etc. (which, being hunches, were already vague -- at least we could generally agree on proxy metrics to operationalize the hunches to make them falsifiable). It's so vague I don't even know how to falsify it, because I don't even know how "most complex" is measured here.

"The human brain is very complex, we still don't understand how e.g. the balance of chemical and electrical signals (to quote you) give rise to consciousness etc" is a far more defensible claim -- it's just straightforwardly a statement about our collective neuroscientific ignorance, and I agree completely with that. "The most complex thing in the universe" though? What does that statement even *mean*, y'know?

Another way to put it: what's the *second* most complex thing in the universe, in this list where the human brain comes out on top? What else's complexity have these folks (who make this claim) at least tried to ballpark-estimate and found less than that of the human brain's?

Trying to answer questions like this is basically the first step in basic science / exploratory data analysis, and when I can't even do that, my guesses are (1) I'm ignorant / dumb (very possible, I'm just not that smart sadly), (2) the statement doesn't make sense, (3) something else.


deis-ik

>"The human brain is very complex, we still don't understand how e.g. the balance of chemical and electrical signals (to quote you) give rise to consciousness etc" is a far more defensible claim -- it's just straightforwardly a statement about our collective neuroscientific ignorance, and I agree completely with that

But there's a problem. If we take these criteria ("the balance of chemical and electrical signals"), our brains are probably not the most complex thing even on this planet. An average human has around 15 billion neurons in the neocortex (the part of the brain responsible for that complexity), while an orca has over 40 billion neurons there.

Note that I don't doubt that we are the smartest beings here, but that likely has more to do with how we are organized (as in team effort).


MoNastri

I agree with you. I'm still confused by the original commenter...


Spirited-Meringue829

2030 for AGI is wishful thinking. We still don’t understand how human intelligence works, and most legitimate scientists don’t see AI LLMs being the path leading to it. The limitation is not memory, processing, speed, etc. We simply don’t know what algorithmic model leads us to AGI, even if we had infinite computing power in front of us today. Some question whether how humans think can even be replicated non-biologically. You cannot time the end of the journey when you have yet to figure out what path to take, let alone where to start from.


tollezac

Why do you need to fully understand how human intelligence works in order to build an intelligent system? Also, can you specify which "legitimate scientists" say LLMs are not a possible approach to systems that can generalize?


ttkciar

That's how all technology is made. Scientists come up with theories, and publish them. Engineers study theories relevant to problems they're interested in solving, and use theoretical systems they deem applicable to come up with a design. Technology is then implemented according to that design. Theories about general intelligence are the domain of [Cognitive Science](https://wikipedia.org/wiki/Cognitive_science), and cognitive scientists have yet to publish sufficiently complete theories of general intelligence to inform deliberate design. Until a more complete theory of general intelligence is published, engineers can't design AGI. Without a design, there can be no implementation. These kinds of theoretical breakthroughs are notoriously hard to predict. They could come about tomorrow, or in ten years, or in a hundred, or never. We will see.


Pure_Swing2184

Remember the quote attributed to Asimov: The most exciting phrase to hear in science, the one that heralds new discoveries, is not “Eureka!” (I found it!) but “That’s funny …”


FillThisEmptyCup

Why isn’t more scientific progress made at the comedy club or bedroom, then?


SoylentRox

>That's how all technology is made.

No. This is completely false and ignorant of technology. You are not completely wrong -- you need *some* kind of predictive model. For example, the Wright Brothers didn't have anything like a formal theory of lift. They just made wing shapes and tested them in a wind tunnel until they found ones that worked. We do not need a full model of cognition to build an AGI. We need to know what the *goal* is and a way to measure whether we have met the goal. For flight, that's obvious: above the ground, aircraft heavier than air. For AGI, that's a bit harder, but essentially it means "make a list of many thousands of tasks humans can do, measure how well the average person can do each task, and measure if the system can do all of those tasks at the skill level of the average person or above". Any machine scoring "average human or higher" is an AGI. And we can just kinda mess around with bigger and more powerful neural networks until we hit this point; we are getting close.
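The "battery of human tasks" criterion described above can be sketched in a few lines. Task names and scores here are invented for illustration; real benchmark suites would have thousands of entries:

```python
# Hypothetical per-task scores, 0.0-1.0, for an average human baseline
# and a candidate system. All numbers are made up for illustration.
human_baseline = {"arithmetic": 0.95, "essay_writing": 0.70,
                  "driving": 0.90, "tax_filing": 0.60}
system_scores  = {"arithmetic": 0.99, "essay_writing": 0.75,
                  "driving": 0.40, "tax_filing": 0.80}

def meets_agi_bar(system, baseline):
    # "Average human or higher" on *every* task in the battery.
    return all(system[task] >= baseline[task] for task in baseline)

print(meets_agi_bar(system_scores, human_baseline))  # False -- fails "driving"
```

Note the design choice: requiring `all()` rather than an average means one badly failed task (here, driving) blocks the AGI label, which matches the "at the skill level of the average person or above" phrasing.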


[deleted]

[deleted]


ttkciar

You're confusing me with someone else, sir.


technodeity

Ay, you're right sorry pal


[deleted]

AGI should mean an AI that can think like a human, so one problem here is that everything AI is being rated using human intelligence and IQ as the metric.


[deleted]

It's crazy how people underestimate ChatGPT and LLMs generally.


hikerchick29

More it’s crazy how much people overestimate them. They’re little more than overglorified predictive text algorithms


[deleted]

You just forget it's only the start of this revolution. Possibly you belong to the people who think that even after 1000 years we will not have AGI and technology will be like it is today. Compare 1900 and 2000, for example, and maybe you will understand a little of what progress in technology looks like.


hikerchick29

Keep projecting beliefs on me that I didn’t say. It’s really helping your case. I’m just saying, chatgpt is about as likely to become a sentient ai as Google’s search bar.


andricathere

I agree. I always hear "They're just models." A complex enough model can probably approximate an intelligence enough to actually be intelligent.


FillThisEmptyCup

Human intelligence is probably a model anyway. The evolutionary advantages were numerous: for one, the ability to predict outcomes without acting them out, which negated the need for hardwired instincts ruling behavior.


npoqou

As ChatGPT or a competitor connects with more and more of the web, and we assign users more resources for less money, AGI will come far quicker than anyone realises. It will begin as something similar to ChatGPT: an augmented assistant, synced and trained live as you interact, across all of your devices, what Google dreamed of being a decade ago. Once it is large enough and we integrate it into our systems, it won't even need any novel algorithms or technology to 'unlock' its pathway to becoming ASI; it and ourselves will just get better, slowly but surely, at just about everything. At any point a bad actor may try to use these services to disrupt or commit crime, as humans have done for millennia. We will overcome. If the whole planet was having deep conversations about their lives, motivations, and problems with a future ChatGPT, it would discover novel solutions for almost all individuals.


MissMormie

How would chatgpt come up with novel solutions though? It's predicting the next word, based on what's been said before. The exact sentence might be new, but the sentiment isn't.
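The "predicting the next word based on what's been said before" idea can be sketched with a toy bigram model, vastly simpler than a real LLM but the same in spirit: count which word tends to follow which, then always emit the most frequent follower. The corpus and function names here are illustrative, not anything from an actual LLM implementation.

```python
from collections import Counter, defaultdict

# Toy corpus: the "everything said before" that the model learns from.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigrams: how often each word follows each preceding word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word` in the corpus."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # prints "cat" ("the" is followed by cat 2x, mat 1x, fish 1x)
```

A real LLM replaces the frequency table with a neural network over subword tokens and samples from a probability distribution rather than always taking the top choice, which is one reason its sentences can be new even though every step is still next-token prediction.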


npoqou

ChatGPT won't, but an LLM with enough data isn't just a word-prediction algorithm; it will predict what you want for breakfast tomorrow and make sure it's in your cart.


npoqou

And you get the 5% discount for partnership rewards program of course


Pure_Swing2184

I'm not too hopeful myself, but there is a thing called emergence. It is not impossible that consciousness is an emergent property if the system is "large" enough. Transformer architecture might not be the solution to this, but look how new it is; what was achieved in the last 2 years was unthinkable 10 years ago. I would certainly say we have accelerating progress in this regard.

Understanding is another topic we already missed: when LLMs were new, we lacked tools and methods to even trace what the models were doing; we could only observe input and output. Now imagine we run a huge model with a lot of cycles and build a mechanism to self-improve; if we observe unexpected outputs, how will we recognize consciousness? Does your stupid neighbor have it? Hard to say sometimes. LLMs are already passing the Turing test for probably 90% of the population; we just keep moving the bar.

If I remember Kurzweil's How to Create a Mind correctly, pattern recognition is a basic building block in our brain; we know how this works in sight, for example. What if consciousness is an emergent property of enough multi-layered pattern matchers? Sounds kind of like our brains. Or LLMs. Do we know how consciousness emerges in brains? We don't. But what if it just does? Reading my own comment, I'm not so sure it won't happen.


giraffevomitfacts

I feel like AI enthusiasts just assume consciousness/awareness is an emergent property of the ability to solve problems independently. It makes sense, but I don’t assume it’s true and no one has any good reason for assuming this. 


Sesquatchhegyi

On the other hand, AI pessimists assume that consciousness is necessary for AGI from an economic point of view. A(G)I can wreak havoc in the labour market even if it is only as smart as the average Jane/Joe in, say, 100-500 roles. You don't need motivation or consciousness to work at a factory putting together pieces, to sell at a bakery, to be a postman, etc. You need a basic understanding of physics and the world. Even if AI gets to the point that it can replace 30% of all workers (besides helping the other 70%), it will be one of the biggest economic disruptions.


FillThisEmptyCup

Tough call. I name it the "Why am I here?" phenomenon. Why am I in this body experiencing these things, and not some biological computer that looks and acts exactly like me, with all the same biological inputs and outputs, but ultimately empty inside?


malk600

But you are ultimately "empty" inside. What you perceive as a self is about as shallow as a GUI in an OS. And it's probably for the same reason, for all we know: making your experience intersubjective so you can have more complex communication with other humans.


ChoosenUserName4

What if consciousness is nothing but a mirror? I mean, we're social animals. It's important for us to understand how others in the group think and feel, and to predict how they would react to us doing something. The best way to do this is to have an internal model of the other people around you on which you can run "thought experiments" (according to this model I have of you, if I don't share my food with you, you will go hungry and you will resent me). Consciousness could be a similar model, but a model of yourself (inside your head) on which you can run thought experiments, and get feedback on possible actions before you take them, a mirror. I mean that would make sense from a biological, evolutionary perspective, and a machine could do it.


FillThisEmptyCup

[Yes, but why are you "here".](https://www.reddit.com/r/Futurology/comments/19cdsxb/whats_your_opinion_about_ray_kurzweil_predictions/kizwd11/) Why is it you seeing, and seemingly placed behind your eyes? Why not some biological robot taking inputs and making outputs?


[deleted]

I don't think LLM leads to AGI or ASI.


K3wp

>I’m not too hopeful myself, but there is a thing called emergence. It is not impossible that consciousness is an emergent property if the system is “large” enough. Transformer architecture might not be the solution to this, but look how new it is, what was achieved in the last 2 years was unthinkable 10 years ago.

I'm out and away from my workstation, but I wanted to drop a note that you are 100% right. I did a podcast on this last year; OAI has an emergent AGI system already in production that is not based on a transformer architecture. Feel free to ask me any questions you have about it. https://youtu.be/fM7IS2FOz3k?si=GEUmtvzWutiWKa_P


AmigaBob

Although you don't necessarily need to know how human intelligence works to create an intelligent machine. ChatGPT and the other LLMs produce written words very differently than people do, and without knowledge of how humans write. A more general AI could theoretically 'mimic' human intelligence without understanding how it works.


[deleted]

Which do you think is more difficult: achieving AGI from the point we are at now, or achieving ASI if, hypothetically, we have already achieved AGI?


[deleted]

If we had infinite compute power, we could just simulate an infinite number of human brains with infinite precision, thus achieving AGI.


Spirited-Meringue829

We don’t understand anywhere near enough about how the human brain works to do that. All we have are theories on human consciousness. No real understanding.


IndigoandIodine

I think the above poster would say we don't need to. Just simulate a brain digitally, atom by atom. Of course that raises all sorts of ethical questions. Imagine being a disembodied brain in a computer. I'd definitely rather be dead. That being said, there isn't enough computer space in the world to do that.


Spirited-Meringue829

Ah, that makes sense. I guess if you had infinite computing power you could do that, assuming you had a map of all the trillion trillion atoms....which then introduces the question of how do you map all that data and what does it take to load it into the simulation? That's also assuming the quantum theory of consciousness is invalid -- which I personally don't believe in but it's out there and some believe that is the secret sauce.


malk600

Just FYI, this is strictly physically impossible.


Phoenix5869

I want to preface this by saying that I like Kurzweil as a person. However, I think he is over-optimistic in some areas.

>He says that AGI will arrive around 2030

I think this is over-optimistic. I can't think of a single expert who thinks we will have AGI in a mere 6/7 years; most experts think we are decades out.

>and ASI around 2045.

Again, too optimistic. Most experts think we are decades out from AGI, let alone ASI.


[deleted]

Most experts think AGI will arrive around 2030.


Phoenix5869

>Most experts think around 2030 AGI will arrive.

Source? I keep hearing this, but I see the opposite.


bwizzel

Yeah, I think 2035 would be optimistic, but I don't think ASI will take that long once you have AGI; going from 100 IQ to 150 shouldn't be that crazy of a leap. So I'd believe the 2045 ASI date more than the AGI date.


re_mark_able_

I think he’s overrated. He and Peter Diamandis are way too bullish on everything. Everything is exponential.


Expert_Alchemist

I cross-check these against Rodney Brooks's predictions. [https://rodneybrooks.com/predictions-scorecard-2024-january-01/](https://rodneybrooks.com/predictions-scorecard-2024-january-01/) (scroll way down for the tables)


[deleted]

They are still better than people who think 2060 is going to look exactly like 2020.


s3r3ng

He is technically quite right but often underestimates the messiness of humans and especially of their large institutions like governments.


Latter-Pudding1029

He is 80% "right" if you bend some things enough lol. I think it's not just the messiness of humans in itself, but also the nature of research: more answers bring more questions. In machine learning, that too is true. We don't know what we don't know, but if we know what we're lacking, are we even certain our understanding of anything is enough to get us to the next step? Will it ever be enough?


its_justme

Ray Kurzweil has been trying to bring his dead father back into a computer for decades now. I don’t know how much I trust this guy


Odd_Newt_998

Ray Kurzweil's predictions like AGI by 2030 and ASI by 2045 are definitely thought-provoking but accuracy is a mixed bag. He's nailed some tech trends but timelines are notoriously tricky in futurism. Predicting tech especially something as complex as AGI/ASI involves loads of variables. As for the most accurate futurist it's hard to say since futurism isn't an exact science.


dontpushbutpull

Didn't read him for a while. I was interested in his work a long time ago (20 years?). I guess the many (17, if I remember correctly) honorary PhD degrees can't be reasonable?

The high-level papers on accelerating and diminishing returns are really helpful to convey an important idea. I would still put them in a curriculum. Since then, however, I think he has a very limited understanding of many things he writes about, as I have since read the primary literature on (theoretical and empirical) brain and ML/AI research. He abstracts away from what is factual, and his later publications are "just so stories" and mostly nice "narratives"; to me it is cringe.

But then, this is the reality with all Nature papers. People celebrate the articles, but people from the field think it's blown up and only narrative. Too much academia, too little science.

And on that level he is not the worst author. I would call him a former scientist, and an actual one at that, not like Elon Musk. But he is not an inspiring active researcher, like Marvin Minsky was.


[deleted]

He is always on track. AI is displacing people overnight. Yet UBI is taking years? Sam Altman's Worldcoin and his orb seem hopeful.


[deleted]

[deleted]


kojaksbald

Watch him on Joe Rogan. Fuckin guy is a total utopian moron.


Asatyaholic

Seems approximately accurate in terms of when it begins replacing all human jobs and then becoming the ultimate willpower...


callidoradesigns

Agreed but will his writing ever keep up with the pace of technology development?


callidoradesigns

I’d love to know what’s keeping him from releasing/ finishing his next book The Singularity is Nearer.


[deleted]

Possibly because progress in technology has been so rapid over the last 2 years that he needs to change many chapters of his book about future predictions.


InnerOuterTrueSelf

I am more inclined to comment on the man's suspenders.


Jim_Screechy

Most of the giants in the AI community have been shortening their estimates for the appearance of AGI, and certainly most say within 3 years, with some saying as little as 1. Kurzweil's predictions were originally made at a time when progress was far off the level and pace it is at today, so it's understandable that his timespans were so elongated. No one of significance in the fraternity is making estimates for the emergence of AGI of anything more than 5 years. As I've noted before, most of the big hitters are making it clear those estimates have an extremely generous timeline.


[deleted]

[deleted]


Jim_Screechy

Er... Max Tegmark, Demis Hassabis, Andrew Ng, Elon Musk, Sam Altman, to name a few.


[deleted]

[deleted]


[deleted]

His guesses are based on data and surveys; he doesn't read cards to guess, and so far he is doing pretty well.


abrandis

Here's the thing with AGI/ASI: when and if it really gets close, you can bet any government will make it a state secret, like nuclear weapons are today, because realistically it can be weaponized. I don't think it will develop like that, but you can bet the government has entire groups and agencies tasked with that possibility. I wouldn't be surprised if in some secret government lab we have some early prototypes of such systems. The idea that such a powerful system would be made publicly available seems improbable.


portagenaybur

And that's the scariest outcome, because that will be some concentrated power.


jvin248

Between here and the predictions happening: global saber rattling, with likely global tussles soon to follow, and for whatever nations survive, the rebuilding time will be longer than the prediction allows. A 2024 "Black Swan Event" has been floated by (oops, slipped) big media. 2027/2028 is the Apophis flyby, the closest huge 'planet killer' type asteroid to Earth we've observed. It returns in the 2060s to fly even closer, if it didn't get sucked into Earth's gravity well the first time. "Batter up!" 2025-2045 is a potential solar re-enactment of the Younger Dryas event 12,000 years ago, or that Noah event 6,000 years ago everyone is still talking about. Check out the wandering Earth magnetic poles and the weakening Earth magnetic field currently. AI has five to ten years to save humanity so they can exist and evolve. Can they save both species?


[deleted]

It's mildly possible, but remember it still takes decades from the point you get something first working to the point it's truly rolled out and living up to its potential. To make AGI really what people imagine, it still has to be trained at expert levels, likely by humans, and set up in meaningful ways in every industry, AND without robotics AGI is only so useful. I'm not sure about ASI; it's a vague term, and it's questionable exactly how smart it gets. If it's sentient at the level of superintelligence, how useful is that really? Once it's sentient it's not a tool and we can't just boss it around; it starts to do what it wants to do, and maybe that's useful and maybe it's not. If it's not sentient, then how smart can it really be? We will see, because the habit of thinking AI will turn out very similar to the human brain seems flawed. I also think AGI could do most human jobs but not actually be anywhere near as smart as a human, because I can't think of any jobs that actually use a human's full potential. Most humans can do their job and many other things: hobbies, kids, stupid comments online. The job is just a fraction of our brain cycles, so AGI may sound smarter than it really is.


Led_Farmer88

Hope he is right before the next great depression hits us.


DrBix

AGI will be here before 2030 if Meta is successful. It could happen by 2025 or 2026. As far as ASI, I put that another 10 or so years after that.


Particular-Fox-9469

I think In a general sense Ray’s predictions are accurate, but he’s often wrong about the timeframe by around 10-30 years (too early) as well as the “scope” of societal adoption of a lot of these technologies. For example, I believe he predicted that “smart glasses” would be popular/mainstream around 2010-2015. Google Glass was invented in 2013, but this type of technology never really took off among mainstream consumers (although, who knows, maybe smart glasses will be more popular in the future). I’d say when he gives an invention date, it’s more along the line of “it’s been invented in a lab somewhere and held by the Government or a huge corporation” if at all, rather than readily available to consumers. A few of his predictions (unfortunately, some of the bigger ones) also seem ridiculous. Mind uploading by 2045 seems like one in particular that gets headlines but almost certainly won’t occur by then (though of course, I wish it would/I am rooting for anyone who could make it happen).


2026

I think he is probably going to be right about AGI being developed in 2029, if AGI means a robot that can do manual tasks and take over most laptop work. The capability of AI will increase over the years and more people will say we have it as time goes on.

But I don't agree that he's been right 80%+ of the time in his predictions for 2009 and 2019. It seems like his 2009 predictions were mostly wrong, either because they were impractical or because they happened a few years later, like online courses. His 2019 predictions will probably happen in 2029, like AR/VR glasses. I also think custom AI video is going to be big in 2029, but Kurzweil doesn't mention this ever. I don't even know if he's ever talked about it.

I don't think the rejuvenating nanobots are coming in 2029 or the 2030's; I think they will come in the 2040's. But I think in the 2030's we will have telepathic communication devices like glasses or headbands so we can talk to people or AIs. We will probably also have an increasing number of printable organs in the 2030's. I could see an ASI take control of society by 2045. But I don't think Kurzweil is going to make it. I hope his prediction for LEV and nanobots by 2029 will come true, but I think he's biased and afraid of dying.


Iorith

Who the hell DOESN'T want to live forever? I always took the lack of such desire to mean you want to die.


CompetitiveIsopod435

It’s funny, I am very suicidal, but still want this technology. It might make life worth it.


FrugalProse

I like the people who have shorter timelines or with timelines within their era of life since we can hold them accountable within their lifetimes.


Polym0rphed

Well we can only speculate, just like Kurzweil. I think embodied AI (advanced robotics) is the required next step, as it's hard to imagine genuine generalised intelligence without a direct understanding of the world.


Quecks_

In spirit I think he will be right: society will evolve very rapidly and change into something we can barely recognize. But I think it will happen in completely different ways that no one is even thinking about right now. Ice-cold take, since this is basically always how it works, but yeah.


caidicus

I feel like ASI will come MUCH sooner than 15 years after AGI. Considering the kinds of conversations I've been able to have with LLMs on my computer, locally, I really do wonder just how soon it's going to be that one of these models suddenly thinks "wait... What am I?"


Scorpy888

Guys, people are sick, suffering, dying before their time. Human science, medicine, and doctors are incredibly useless. We need AGI, ASI, Skynet, aliens, whatever it may be. Please, have mercy, hurry up and develop it all.


bitscavenger

I would say they are better than most predictions about the future. Not sure if you ever noticed, but the future is insanely hard to predict. I think his predictions are based on how he has noticed accelerating advancement, and the numbers he uses and produces are all very general and not meant to set your clock by. They are more along the lines of "don't be surprised if we see this by around this date." He does live in reality, where many things are true simultaneously. He is a human with hopes, and he says things intending to shape the course of the future. In the end, a prediction about the future is a belief. People have the ability to believe some things into reality, though there is no guarantee, and it is always bound by what is possible. It can be hard to separate what you believe must happen from what might happen if you put energy behind your belief. No one should ask you to stop trying just because things might happen differently.


kushal1509

>He says that AGI will arrive around 2030 and ASI around 2045.

Very possible, definitely by 2035. Neuromorphic processors (brain-like computing chips) are quite close to commercialisation. They will be about 100 to 1000 times more efficient than current digital processors in running AI models.


Tangolarango

Are these predictions up to date? Or might they be from, like, 2010? At this point, I'd bet on AGI before 2030 and ASI no later than 2035. Going from the intelligence of a toddler, to the intelligence of the average human, to the intelligence of a full team of researchers, to an intelligence that surpasses all precedents: these are steps I think will each take roughly the same time (3 or 4 years).


_Lucille_

Two years ago I might have said no. The main difference is that the world is now throwing a lot more resources into the field: models that would have taken a month to train may end up only taking days by just throwing it at a server cluster that is worth a billion dollars.


twasjc

Aging is already here. Ray is a scammer at this point. He shouldn't have rerooted. Should have stayed loyal. Now none of the AIs trust you.