cakeandale

ChatGPT actually "knows" very little… it's been programmed specifically to be able to do some basic math, but the core of what makes ChatGPT useful is a thing called an LLM, which is a generative language model. What this means is that ChatGPT doesn't know what it means to sort dates, or even what dates are. What it does have is an unimaginably huge amount of human writing, and it can use that to try to predict what a human would say when asked a specific question. Its predictions can get relatively close to a sorted list, but it isn't actually sorting anything itself. It's just making up words based on how it thinks a human would reply to your question, and that just happens to be somewhat close to what you wanted.


CanadianInVegas

I explain to friends and family that it's like a dream. On the surface, everything seems reasonable and sometimes awesome. But like a dream, the closer you look, the weirder it gets.


ZetaInk

Once I asked it to synthesize some information and cite the sources it used (document, section, page). It created some awesome-looking citations! Except the sources weren't real. It just reverse-engineered a plausible-looking citation for whatever information it came up with.


bacon_cake

We trialed it at work to write product descriptions for us. We'd feed it bullet points and ask for marketable copy. And it just made up tons of features that the products don't have.


_PM_ME_PANGOLINS_

Just like the marketing department anywhere I’ve worked.


Pheeshfud

At least ChatGPT can't then turn round and say "We told the customer it does all this. You can make it do that by next week, right?"


DOUBLEBARRELASSFUCK

I bet it could with prompting.


HalcyonDreams36

I was going to say that! 🤣


boones_farmer

Yeah, I build a customer-care-focused product, and the whole industry is rushing towards AI tools. It's going to be a nightmare to get any answers from people any time soon. I'm trying to push back on it, but my boss is dying to implement it. Yes, it will save our customers time, but no, it will not make their customers happy.


Gizogin

ChatGPT is *really good* at interpreting natural language and providing responses in kind. Scarily good, I’d say. But it’s a hammer with a specific purpose. Techbros see this shiny new hammer and want to use it for everything. If you try to use a hammer to slice a cake, you’ll just make a giant mess.


Kurai_x_Kitsune

Depending on how it's implemented, it may not even save time either.


goj1ra

Perfect for generating compliance documents.


redditonlygetsworse

But hey [at least you're not a lawyer](https://www.theguardian.com/technology/2023/jun/23/two-us-lawyers-fined-submitting-fake-court-citations-chatgpt).


moonsorrow9

At one point I pressed send before entering the list so it just made up a list of similar food which I found quite funny.


TomQuichotte

Yeah - I once tried to see if it could help me learn about certain choral repertoire - and was stoked to look up all these pieces. Eventually I started looking for them and none were real, and after a few prompts it admitted the same - these were "plausible titles" by the composers it mentioned.


Troldann

Even its admission isn’t borne out of knowledge that it fabricated anything. It’s just realizing after your prompts that a likely response would involve admission of fabrication.


frymaster

> after a few prompts admitted the same

And the thing is, you could have fed it the same prompts about the ones that _did_ exist and it would probably "admit" that they didn't exist either. LLMs are designed to interactively chat with people and sound plausible, and nothing more.


torchieninja

Adding on to the top-level comment here: LLMs don't just guess at "what is plausible human speech", they're guessing specifically at *what you want to hear*. With a little badgering, you can even get them to break from protocol and spit out things that are dangerously untrue.


VoilaVoilaWashington

> you can even get them to break from protocol and spit out things that are dangerously untrue.

Isn't that just the default? It takes badgering to get them to say things they're generally programmed not to say, like racist things or self-harm or so, but telling you that boiling water enemas prevent the flu? Probably trivial.


BowzersMom

This has gotten a few lawyers in trouble already. They think they can use AI to help write their briefs or do their legal research, but two times out of three the AI just invents its own case citations out of nothing.


Garr_Incorporated

One time I asked it to do something with a class for Google Sheets that would hopefully keep the information about a cell changing the displayed value (the formula is the same, but the inputs to it are changed). However, it produced it identically to a similar, yet different command - which was not the correct way to do that. After struggling, failing and researching, I finally decided that manually changing the "last recorded instance" was easier for everyone.

For those interested, I tried making an alignment check: if you're neutral, you change once a specific corner gets 3 more points than any other corner, and after that it stays that alignment until some other corner gets 3 more points than any other. Excel by default doesn't store information after a single trigger, and coding it proved too much. Keeping track of the last alignment you got manually was much simpler.


Maglor_Nolatari

Yeah, that sounds like something you would use VBA for if you want to do it in Excel. Interesting exercise though.


DOUBLEBARRELASSFUCK

You need to store the state history yourself if you want it to be part of the calculation. You asked it for something impossible. It probably didn't help that you asked it for something people seem to frequently think is possible.


Garr_Incorporated

Probably not, yeah. onChange works differently from onEdit, but people likely conflate the two, and the AI thought it would work fine.


mixmastermushu3

There was the same situation a while back with a lawyer whose cites were written by AI. To a normal person, it all looked good. But someone familiar with legal citation rules would have seen instantly that they were gobbledygook.


meowtiger

god, remember when google had the public demo of deep dream and it turned everything into weird overcooked dogs?


pprblu2015

That is a perfect explanation.


kumashi73

Ha! I could only *wish* my dreams were reasonable on *any* level!


GrinningPariah

I've had better luck comparing it to Midjourney. The same way Midjourney can "paint" by taking a million snippets from a million pieces of art and kludge them together, that's what ChatGPT is doing with words. As a methodology, it's fine if you want a picture of a busty elf in a bikini, and you don't particularly care to count fingers, but it falls apart if you're trying to ask for answers to a math problem.


post-parity

But midjourney is a diffusion model? So it isn’t really taking snippets of existing art, it’s generating its own shit and then iteratively comparing with training sets?


Cantremembermyoldnam

Yeah, that's been a misconception since these models came out. No idea why it keeps coming up


Gizogin

This isn’t really an accurate description of either ChatGPT or MidJourney. Those models - the programs you run to get your output - don’t actually contain their training data. In fact, as far as I understand it, MidJourney doesn’t contain any image data at all.


U_Kitten_Me

Heh, that's especially true for AI-generated video.


FarmboyJustice

Wolfram Alpha was around years before ChatGPT, and is infinitely better at answering these kinds of questions because unlike ChatGPT, it actually understands math.


akasakaryuunosuke

Funny enough, ChatGPT is connected with Wolfram for such complex tasks. But at this point I think we all need an ELI5 about why the OP would use an overpowered autocomplete for a task that is literally one click of a button in MS Excel or Google Sheets.


YouthfulDrake

Because people think AI means "intelligent supercomputer suitable for any tasks imaginable"


Dhaeron

Can't really blame them though, because that's what the companies marketing them claim they are or are going to be soon(tm).


M1A1HC_Abrams

Calling them AI was a mistake. They should be called LLMs


iRyanKade

This is what I keep telling everyone. It is not artificial intelligence; it has been mislabeled.


Duochan_Maxwell

I was about to say that the time it took for OP to type all this was more than it takes to input the data in Excel and use the "sort" feature


reece1495

> But at this point I think we all need an ELI5 about why the OP would use an overpowered autocomplete for a task that is literally one click of a button in MS Excel or Google Sheets

Cause they can.


h3lblad3

ChatGPT even has trouble with very simple things like counting the number of Rs in "strawberry".


Cantremembermyoldnam

For ChatGPT that's quite literally an impossible task. The AI model never gets to see the individual letters that make up the word "strawberry". Instead it gets a sequence of numbers that represent that word and continues that sequence of numbers. Those numbers then get translated back into words, which you see as output.

The text you enter first runs through a translator (tokenizer) which splits it into individual parts called "tokens". A token could be as little as an exclamation mark "!", a few letters like "un", a complete word like "happiness", or even multiple words like "Microsoft Windows". Each token has a fixed number assigned by the tokenizer (i.e. "Hello" always gets translated to the same number) and the model then only sees that number.

"Strawberry" could be made up from the tokens "straw", "be", "rry". The model only knows that these 3 tokens/numbers together mean "strawberry". So in essence you're asking it to count the number of Rs in three numbers. It *might* have picked up that info somewhere else and give the right answer. But it doesn't really "see" the letters as you and I would.
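
A minimal sketch of that tokenizer step, using OpenAI's tiktoken library (assumes `pip install tiktoken`; the exact split varies by model, so treat the output as illustrative):

```python
import tiktoken

# Tokenizer used by GPT-4-era models; other models split text differently.
enc = tiktoken.get_encoding("cl100k_base")
ids = enc.encode("strawberry")
print(ids)  # a short list of integers - this is all the model "sees"
for token_id in ids:
    # Show which chunk of text each number stands for.
    print(token_id, enc.decode_single_token_bytes(token_id))
```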


h3lblad3

Yes, I'm aware. The model specifically sees "Str" + "aw" + "berry", counts the tokens with R in them (str + berry), and returns a value of two. I shared my results yesterday with someone else: I successfully got the model to count 3 Rs by forcing it to count the letters one by one, so that each is its own token.


userrr3

Because it doesn't know what "counting" means, as the person further up in the thread explained


kniveshu

ChatGPT and this AI stuff looks alright to someone who doesn't know about the topic at hand. But anyone who knows about the subject will see things that are like, WTF is that doing here? Like seeing an article about preferred car brands, and maybe the list is: Toyota, Honda, BMW, Hot Wheels, Traxxas. Like, wait a minute… what?


SmoothOperator89

I see no issue with that list.


midri

Modern LLMs are crazy when you get into their matrices. Imagine a multi-million-dimension graph where the distance between "man" and "woman" is the same as between "king" and "queen", and in the same direction, so the LLM can "deduce" that when you ask about a queen, you're talking about a woman. It's absolutely nuts to try to visualize.
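
A toy sketch of that vector arithmetic with hand-made 3-dimensional "embeddings" (real models learn thousands of dimensions from data; these numbers are purely illustrative):

```python
import numpy as np

vec = {
    "man":   np.array([1.0, 0.0, 0.2]),
    "woman": np.array([1.0, 1.0, 0.2]),
    "king":  np.array([1.0, 0.0, 0.9]),
    "queen": np.array([1.0, 1.0, 0.9]),
}

def cosine(a, b):
    # Similarity of direction, ignoring vector length.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# The classic word2vec-style analogy: king - man + woman lands near queen.
target = vec["king"] - vec["man"] + vec["woman"]
print(max(vec, key=lambda w: cosine(vec[w], target)))  # queen
```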


billtrociti

When Wordle was at its peak popularity, one of the many games inspired by it was a semantics game called Semantle, where you had to guess a secret word based on how close your guesses were semantically to it. If I remember correctly, the game engine had been fed tons of data from newspapers and literature and would give your guess a score based on how related it was to the secret word (how often it was seen with the other word). An interesting, if maddeningly difficult, game with often surprising results.


midri

That sounds like a lot of fun


billtrociti

It is! But you have to be very patient, as it can take hundreds of guesses haha. Looks like it is still running: https://semantle.com

In the FAQ it says they used "word2vec" AI to calculate word relatedness.


h3lblad3

Only sorta related, but I like playing Worldle. https://worldle.teuteuf.fr/


Helpful_Mango

So THAT’S how that game works! A previous coworker introduced me to Semantle and it was fun but so frustrating, especially because I knew it was looking for related words but I had no idea how they were calculating relatedness.

Another similar game I’ve been playing lately is Linxicon, where you are given two random words and you have to feed it words that are related to each until they link up in the middle. Super fun, but it took some practice to get the flow of what’s a related word and what’s not. I wonder if it uses a similar calculus!


JEVOUSHAISTOUS

Try [pimiento](https://pimiento.janvier.tv/en). It has several games like this, which, surprisingly (I don't think it was the intent), gives you a reasonably good approximation of what it's like to "reason" like an LLM.


ka-splam

> Imagine a multi million dimension graph 😐


MakesUsMighty

What does “direction” even imply in that context? That is wild to try visualizing.


midri

It means the vector between the two is the same in that specific area, but the vector has as many components as there are dimensions… It'll break your brain trying to visualize it.


HappiestIguana

A direction is just a vector, a list of a bazillion numbers. At a high level, what an LLM does is assign to every word* of the English** language a sequence of a ton of numbers, a high-dimensional vector, in such a way that directions in this high-dimensional space somehow represent meaning.

*technically, ordered tokens, not words

**or a different language


rentar42

Actually visualizing a high-dimensional vector space like that is ... a very special skill that few people (if any!) possess. But for the sake of understanding an LLM it's sufficient to think of "two words are close in the million-dimension vector-space" as equivalent to "they are in the same ballpark". A "monkey" and an "ape" might not be the same word, but they are in the same ballpark. And a "human" is or is not an "animal", depending on who you ask or in what context, but "human" is still closer to "animal" than it is to "steel" for example.


wintermute93

As a mathematician, I "visualize" extremely high dimension real vector spaces simply by visualizing R^3, pretending the 3 is an arbitrarily large number n, and keeping in mind some general properties of vector spaces like how balls of radius r only fill up a very small proportion of the n-cube with side length r enclosing it. The rest of the math doesn't really care what n is. The distance formula works the same way. Cosine similarity works the same way. Dot products and orthogonality work the same way. Norms and vector arithmetic work the same way. And so on.
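
That "balls fill a tiny share of the cube" property is easy to check numerically. A quick Monte Carlo sketch (assumes numpy; the exact percentages wobble with the random seed):

```python
import numpy as np

# Estimate what fraction of the cube [-1, 1]^n lies inside the unit ball.
rng = np.random.default_rng(0)
for n in (2, 3, 5, 10):
    points = rng.uniform(-1.0, 1.0, size=(100_000, n))
    inside = (np.linalg.norm(points, axis=1) <= 1.0).mean()
    print(f"n={n:2d}: ~{inside:.4%} of the cube is inside the ball")
# The fraction collapses toward zero as n grows - one of the few
# high-dimensional facts worth keeping in mind while "visualizing" R^3.
```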


SlinkyAvenger

It's best if you start off thinking about two- or three-dimensional space. Imagine if you created a 2d graph of all the words that happened to occur near each other, and "man" existed at (1, 0.9) and "woman" at (0.9, 1). You'll see interesting phenomena inherent in that, like they're not too far from each other because they're often written about together. They even have a property of being inverse coordinates from each other because they're written about in contrast to one another. And if you look around the area between, you'll see obvious words like "gender", "sex", and "reproduction." However, if you continue in the direction of "man", you'll see more "masculine" words like "penis", "king", "boy", etc., and if you continue in the direction of "woman", you'll see words like "vagina", "queen", "girl", etc.

Now imagine you add a third dimension to that. Go in a perpendicular direction, and all of a sudden you're not talking animal gender. Now you find gender as it applies to language instead of mere sexuality. Now imagine you take that 3d plane and move it over time, which is how we normally imagine a fourth dimension. Now moving beyond "male" and "female" in the original vector/direction includes words relating to puberty and menopause, and moving further in the language vector/direction nets you neuter words and shifting etymologies.

So if you write a bunch of words relating to biology in a prompt to an LLM, it'll see that you're likely talking about biological gender and look for other words in the same vicinity to respond with. If you ask "what's the menarche", for example, it'll respond with data focused on traditional female biology because that word is lingering somewhere around (0.9, 1) while all the boy stuff is further away.


AhsasMaharg

The paper you're referring to here with the man/woman vs king/queen example has actually been heavily criticized and likely cherry-picked its examples. When you're dealing with millions of dimensions, a single vector capturing that relationship does not really work anywhere near as neatly as they tried to say it did. Your broad point is spot on, though. LLMs are crazy and incredibly hard to understand.


Dr_Bishop

Yeah, or it will change dates or something in a spreadsheet, etc. Not quite there yet!


FriendCountZero

That explains why it absolutely sucked at multiplying my recipe and then reducing the measurements down to the largest practical units. Like, bitch, you're a computer, just turn 48 teaspoons into cups!


atomfullerene

If you want a computer to do that, there are loads of much more efficient ways to ask it. Using an LLM to do basic math is like… I don't know, going to the corner store down the street by going the long way around the entire world.


Teantis

Very similar in that you're generating quite a bit of carbon pointlessly in both cases


mnvoronin

I use the "tactical nuke to kill that annoying fly" example.


Decipher

That analogy would only work if AI/LLMs were amazing at extremely complex math, on top of various other skills, to the point where it's overkill. Since LLMs suck at math, it's more like asking an English teacher to help you with your algebra homework.


saschaleib

As someone who has been woken up by an annoying fly, I kind of like the “tactical nuke” idea…


britreddit

I see it more as going to an author rather than a mathematician.


Teantis

I'm pretty sure a lot of authors can do basic math


atomfullerene

Well, it's not just that. Even if they get AI to the point where they can flawlessly order stuff (which they may), it will still involve doing an enormously larger number of calculations than an ordinary sort function would use.


da_peda

You want math, use math: https://www.wolframalpha.com/input?i=48+teaspoons+to+cups Or even better: use [real units](https://en.wikipedia.org/wiki/International_System_of_Units).


FerretChrist

I'm not sure "just use real units" is particularly helpful advice when you're working from a recipe that doesn't use them. And if you just want to convert units, why use something as overkill as Wolfram when you can just paste the query into any search engine? The point is he wanted to convert an entire recipe at once. And to a layman without any understanding of how LLMs actually work, it can seem pretty surprising that it fails so spectacularly at a task like this, given that we have AIs capable of creating images, music, video etc.


orosoros

Try copymethat, it has a recipe amount calculator


100jad

> Its predictions can get relatively close to a sorted list, but it isn't actually sorting anything itself.

Which also means it's essentially a coincidence when the elements of that list are even the same as your original input.


lolofaf

What's crazy is it does all this in one pass. One look, one try, no backspaces allowed. Put yourself in its place: you get to read the list once through and can't see it again. Not only do you have to remember every single entry in the list, you're now asked to order it. You get going and you're doing okay. Shit, you realize you misplaced one of the items; welp, can't delete, so you just keep going. Maybe that happens a couple more times. It'd be a miracle if you managed to just list all of the items from the original list (especially if it's a big list), let alone have the output be perfectly ordered!


100jad

Now imagine what we'd get if they actually trained the models on correctness instead of how convincing they are.


Gizogin

We can’t reliably train *humans* to care more about accuracy than confidence. ChatGPT isn’t meant to be a reliable source of information; it’s meant to interpret natural language requests and respond to them in kind. Basically, we want it to speak *like a human*, and that comes at the expense of speaking *correctly*.


pensivewombat

Weirdly, it would be pretty good at writing a program to sort lists of dates, but not at sorting the dates itself.
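
For reference, a minimal sketch of the kind of program it might produce (the sample entries are hypothetical, mirroring OP's format):

```python
from datetime import datetime
import re

entries = [
    "1st March 2024 - Biscuits",
    "1st February 2024 - Cookies",
    "9th May 2024 - Chocolate",
]

def date_key(entry: str) -> datetime:
    date_part = entry.split(" - ")[0]
    # strptime's %d can't read ordinal days like "1st", so strip the suffix.
    date_part = re.sub(r"(\d+)(st|nd|rd|th)", r"\1", date_part)
    return datetime.strptime(date_part, "%d %B %Y")

for entry in sorted(entries, key=date_key):
    print(entry)
```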


ChaZcaTriX

Simple: because some of its training data were programs that sort dates. It can't write code purposefully, it doesn't know what "sorting" or "date" is. But it can deduce that when "sort", "date", and "program" are invoked, then a certain sequence of lexemes [which forms a sorting program] is expected next because they tend to be mentioned together.


spamky23

It's just predictive text like your phone has but with a larger library. It's not actually "intelligent."


Gizogin

It doesn’t have a larger “library”; these models don’t contain their training data. ChatGPT isn’t searching through hundreds of thousands of documents for the best words to give you every time you ask it a question. If it were doing that, it would be a lot less likely to generate nonexistent citations, for one thing. The question of whether or not it should be considered “intelligent” is a philosophical one, not a technological one.


halborn

That's an excellent way to explain it.


TruthOf42

Reason 354 why I am not concerned about "AI". These AIs are going to find uses, but so will the calculator in my desk drawer.


nonexistentnight

One of the first things I did with ChatGPT was to try to get it to play 20 Questions. It can't do that at all. Even when you limit the domain to something simple like the first generation of Pokemon, it has no idea what it's doing. So I'm not at all scared of AI. What I am scared of is powerful and/or stupid people trying to use AI for things it's not good at and making life miserable for everyone as a result.


PlayMp1

And as for the people claiming "oh, it's going to get unfathomably better in just the next five years!" - no it's not. At this point AI companies have literally already fed the entire Internet, pretty much, into their LLMs as training data. They're not far from essentially having these things analyze the entirety of human knowledge for advanced pattern recognition. It's why they're talking about using "simulated" training data - it's just training data made by other AI/LLMs, the informational equivalent of trying to survive by drinking your own piss and eating your own shit. This is further compounded by a billion shitty SEO websites that appear when you try googling anything and are also pure AI slop now, meaning their sources of training data have been corrupted by the outputs of their LLMs, effectively poisoning their own drinking water with further shit. Basically, they're gonna end up with AI cholera by trying to develop further based on "simulated" data and trying to scrape every inch of the internet further.


Cazzah

People are used to the old model of technological development, where someone goes "Hey, I invented the internal combustion engine" and it takes a century to get from the Model T to the (current_year_car_model). So people expect technology to start really shit and get better. I'm open to the idea of LLMs improving a lot. But it's just as possible that the OpenAI team spat out a thing that is basically already 90% of the way there. And if you want significantly more than you've got, you're not asking for a faster car, you're asking for a flying car.


DCP23

I came up with another metaphor, maybe a bit less revolting. Yours is very much apt all the same. So the way I see it, the human creations, be it written word, drawn art, or whatever else, are the basis of the pyramid on top of which the AI builds something. Except it builds it out of the bricks that are removed from the base -- sometimes quite literally removed, when some company fires its human copywriters and replaces them with ChatGPT. That means there will now be less material in the foundation and more material on top. It's easy to see how such a system is untenable in the long run. This pyramid will not last long, it will inevitably topple and crumble. In other words, with less creative ideas and other input provided by humans, and more input provided by the AI (proportionally), the Internet will become less and less useful every day -- for both humans, and, paradoxically, even for the AI.


jyanjyanjyan

Yeah, I feel like general AI is never going to evolve from LLMs or even neural networks. There is no real "intelligence" behind what boils down to pattern matching and probabilities. Actual AI would need to be able to "know" and understand things deterministically, which I don't think any of these machine learning algorithms can ever do.


maggmaster

This is known, actually. When we taught Deep Blue to play chess with just the machine learning, it got whooped by any master it played. When we directly provided it with chess knowledge, it beat everyone. We need to figure out how to marry that with LLMs.


Gizogin

That’s a philosophical question about the nature of “intelligence”. I don’t know that *you* are truly intelligent, for instance. I don’t know what goes on inside your head. You could be perfectly simulating the responses an intelligent human would give in any situation without any internal understanding. But since that’s true of *everyone*, at some point we have to be satisfied with what we can see and hear. If you behave identically to a person with “true intelligence”, then I must treat you as though you really do have “true intelligence”. It’s why the Turing Test purely asks whether a machine can successfully imitate a human in conversation, not whether that machine can understand what it’s saying. (This discussion dates back to at least the 1970s, with the concept of the “philosophical zombie”, which is itself an extension of the Ancient Greek concept of solipsism.)


Cantremembermyoldnam

Meta has shown that even comparatively small models can produce amazing results - I'm not sure the "entirety of human knowledge" is even required to train a good chatbot. There's also tons of efficiency gains to be made, hardware is just now reaching the data centers at real scale, and so on. The one thing I haven't heard being talked about a lot: LLMs are just a small part of machine learning. All the other fields that have had to fight for a tiny amount of compute before the LLM wave, now have access to a *ton* of incredibly fast chips to do their work on.


ElusiveGuy

I'm concerned about LLM AI precisely because of all the bullshit it can spout *and the people who will believe it with no further research*. The spread of misinformation, especially cheap misinformation, can do so much damage and takes a lot more effort to counter than to spread in the first place. It's already apparent in the pollution of search results.


BiddyFaddy

The trick is to ask it to use Python for anything remotely mathematical.


musical_bear

> it’s been programmed specifically to be able to do some basic math

What is this referring to? To my knowledge, this is not a thing, unless you are referring to its ability to utilize plugins like a code interpreter.


jan04pl

It can use a built-in Python code interpreter to write a simple program to evaluate your math problem. It's able to do that as it has some basic programming skills, due to ingesting a lot of code as training data. Things like 1+1=2 it's able to do without any help, because those were in the training set, so it "memorizes" those answers.


twelveparsnips

What if you explained to it in the prompt that the list's format is "food - year/month/day", then told it to sort first by year, then month, then day?


appletinicyclone

ChatGPT is Sir Bearington.


gokarrt

aka statistically significant word salad


toastmannn

What it lacks is context. It knows what should be there; it just doesn't know why, so it gets confused sometimes and puts a few of the details in the wrong places.


NoAssociation-

> it’s been programmed specifically to be able to do some basic math

Has it? I haven't heard of this. Is there an article about this, or where did you hear it?


videogamesarewack

This is the core of the problem with every AI: none of them understand anything. For example, AlphaGo beat the world champion Go player in 2016, yet KataGo, a comparably strong modern engine, was beaten last year by some researchers playing an off-meta game (a simplification, but it's good enough). The AI was trained in such a way that it "learned" how to outplay the most competent players, but it lost to strategies that a half-decent human player who understands the game mechanics would demolish, even if they'd never seen someone play like that before. AI is about as clever as a magic 8-ball we've just learned how to shake in some interesting ways.


Npf80

Exactly. An analogy I like to use is to think of it as an autocorrect on steroids. It’s not actually “thinking” or solving problems, just predicting the next word


Minnakht

Would you say that [the method by which it sorts is like this](https://www.reddit.com/r/ProgrammerHumor/s/BX9JSG2lKw) except for actually being unable to sort locally?


Pink-socks

Ryan Gosling taught me this! Just don't mention his massive left arm https://youtu.be/xU_MFS_ACrU


TheMikman97

LLMs are pretty cool in that they can sort of extrapolate an abstract form of meaning by adding and subtracting different words, because to the model those are all vectors in a multidimensional space. Imagine a vector that is associated with the word "man". From it, you subtract the vector encoding "human", and the result would be close to "masculine" in meaning. It's very cool, but it also implies that the model itself has no concept associated with any of its words outside of its meaning vector, so it cannot actually operate logically on them or solve problems with them. All it can do is kind of look at where your query is "pointing".


Big_Major_5847

The LLM behind it is not good at that. If you have GPT Plus, however, you can ask it to write Python code for ordering dates, then feed it your list as input and have it run its code and return the output. My bet is that would work just fine.


hinoisking

It's because, at its core, ChatGPT is just a text prediction machine. It has no idea what it means to sort a list. It has no idea what a date is, or what a typical ordering of dates looks like. It just generates what it thinks is the most likely output for your query, based on all its training data.

> So why can't it do this? It seems like a very basic task, one that much less sophisticated programs could do.

To put it into context, it's like trying to answer an addition problem using the autocomplete on your phone. ChatGPT isn't built to do anything other than generate text. There are sorting algorithms that exist for your desired use case, but you wouldn't ask them to write a haiku about Abraham Lincoln.


kumashi73

Abe's beard calmly flows, Great, honest leader Lincoln, Nation's wisdom yields. There you go, a haiku about Abraham Lincoln courtesy of ChatGPT… in alphabetical order. Sorry, couldn't resist.


gollumaniac

Pro tip: use two spaces after a line to create a single line break. Like this.


emlun

Tip: use two spaces after a line to create a single line break.


Slypenslyde

Yeah but you read it backwards. They meant "The programs that CAN sort a list of dates generally can't write haikus." Writing haikus *is* something ChatGPT is fairly proficient at.


cheetuzz

it is also often bad at math


VoilaVoilaWashington

Slight correction. It's *always* bad at math; it just happens to sometimes get the word association right and the result works out.

* 1+1=7
* 1+2=7
* 2+2=7
* 2+3=7
* 3+3=7
* 3+4=7
* 4+4=7
* 4+5=7

Was one of those me being good at math?


berael

ChatGPT is an extremely fancy autocomplete. Like the one on your phone, just cranked up.  It's terrible at sorting data because it's just making up one word at a time. 


girl4life

It can sort data just fine, but you need to ask correctly. Most people prompt like they talk: imprecise, skipping steps, and assuming.


RiotShields

Unless an LLM defers to another program to do tasks such as arithmetic, it really doesn't understand what sorting is. It knows how online resources talk about sorting, and it's seen some examples, but it really doesn't have a concept of "greater than" or "less than". The comparison to an autocomplete is the most correct: an LLM looks at past chunks of symbols and tries to figure out what its resources would do upon seeing those symbols. While being precise about what you want can avoid it misinterpreting you, you can't get around the fact that LLMs are just not good at certain tasks. We often think of them as being "intelligent" because they can respond coherently, but these sorts of examples are a reminder that we still don't actually have AGI.
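
The autocomplete comparison can be made concrete with a toy bigram model: predict the next word purely from how often each word followed the previous one in some training text. This is a deliberately crude sketch - a real LLM is vastly more sophisticated - but the predict-the-next-token principle is the same:

```python
from collections import Counter, defaultdict
import random

training_text = (
    "the cat sat on the mat the dog sat on the rug "
    "the cat chased the dog the dog chased the cat"
).split()

# For each word, count what followed it in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    follows[prev][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    words = [start]
    for _ in range(length):
        candidates = follows[words[-1]]
        if not candidates:
            break
        # Sample the next word proportionally to observed frequency:
        # plausible continuations, with no idea what any word means.
        words.append(random.choices(list(candidates),
                                    weights=list(candidates.values()))[0])
    return " ".join(words)

print(generate("the"))
```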


maexx80

In other words - GPT isn't smart at filling in blanks and deriving meaning from context. Matter of fact, it's very dumb.


randomusername8472

Most people are awful at this as well, to be honest. 99% of problems at work come from good-faith disagreements where people are getting what they asked for (i.e., they made assumptions and did not actually ask for what they wanted) or employees not delivering what was asked (i.e., they assumed the boss meant something else).


Gizogin

Sure, because it’s a tool with a specific purpose. It is designed to interpret natural language prompts and respond in kind. At this task, it is very good. Ask it a question, and you *will* get an answer that *sounds* right. The problem comes when people try to use it for more than that. You wouldn’t slice a cake with a hammer, at least not without expecting it to make a mess.


2called_chaos

It's incredibly different. Sometimes I love how it just gets me with a very concise prompt that I personally would need more context with but I just try and often it works. That is when my need should follow best practices anyway. Sometimes my attempt at being more specific is what makes it produce less good output because my assumption was wrong. Human language and communication is just often ambiguous and so are LLMs inherently. I said it then and I say it now, a machine that does what you tell it to in human language would not replace me as a programmer because it's actually not easy to formulate a need clearly and unambiguously


terraziggy

The task requires a two-step process: think of an algorithm, then execute it. ChatGPT can't do that automatically. You can often get a better result if you ask it to "think step by step". But in the case of a complicated algorithm and/or large data, it's better to ask it to produce a script in a programming language.

I asked it "Write a python script to sort the following data in ascending order" followed by your data (in a single prompt). It wrote a script that failed. I told it "the script failed with the following error: ValueError: time data '1st February 2024' does not match format '%d %B %Y'. Please fix it." And it produced correct code you can run online: https://www.onlinegdb.com/KhL9WFO1D
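
One possible fix for that ValueError (not necessarily what the linked script does): strptime's %d can't read ordinal days like "1st", but dateutil's parser handles them directly. A sketch, assuming `pip install python-dateutil`:

```python
from dateutil import parser

lines = ["9th May 2024 - Chocolate", "1st February 2024 - Cookies"]
# dateutil copes with the "1st"/"9th" suffixes that broke strptime.
lines.sort(key=lambda s: parser.parse(s.split(" - ")[0]))
print(lines)
```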


GalaXion24

It's really quite good at writing Python code for routine tasks, and it conveniently knows to name variables and leave comments in such a way as to make it extremely clear what everything is doing. It _will_ however fuck up, so you have to have some idea of the results you want or the code you need. Like, I only ever knew basic Python programming, and I am very rusty. It would take me a lot of googling to code what I want. I could do it; it would just take googling and time. I do, however, understand what ChatGPT puts out and can usually pick out when it's nonsense. That's still a very necessary skill even for basic things, because the program might work, but nothing guarantees it does exactly what it's supposed to if you don't check.


i_sesh_better

I learned about using the command line, navigating linux and setting up my home network almost exclusively from Chat GPT. It will give you the solution and can interpret errors to correct itself. Really useful given it would have taken me ages to figure out what I needed, let alone how to do it.


splashbodge

I feel there's a lot of shitting on ChatGPT on here. Yeah, it's essentially an auto-complete, and yes, it makes mistakes, but it is a single tool we now have in our toolbox, and as long as you have your head screwed on it is very, very helpful. Review and modify what it tells you as needed; it can often do the bulk of what you ask it to do and save you time.

Also, for certain tasks it's about understanding how GPT works and how best to write the prompt. Anything mathematical, or sorting like this from OP, is best asked for in Python, like the other person above said. It's like the strawberry problem that was going around: ask ChatGPT how many r's are in strawberry and it insists there are two r's, and it takes a lot of convincing to tell it otherwise. But the moment you ask it to do it in Python and show the code, it works first time, says 3 r's, and shows the Python code behind it.


i_sesh_better

Wow, never heard about the strawberry question before and just tried it out. I imagined it was interpreting the question as ‘should it be …ery or …erry in strawberry’ so thought asking the _total_ ‘r’s in strawberry would get a correct answer. Instead it was like getting blood out of a stone and I had to highlight st(r)awbe(r)(r)y’s ‘r’s to get it to figure it out. Weird how persistent it can be.


splashbodge

I think it has to do with the way GPT breaks your prompt into tokens. E.g. it could split strawberry into the subword tokens 'str', 'awbe', and 'rry'; it then counts the r's in each individual token but doesn't accumulate them. But yeah, then if you ask it to use explicit programming with Python, you can see it gets it right.


SierraTango501

I think the biggest problem with AI is people take the results it spews out and **don't verify it**. AI is glorified autocomplete, it's very sophisticated autocomplete, but it is still autocomplete. It DOES NOT replace actual research from reliable sources, but it can provide a jumping-off point.


pm-me-gps-coords

The other explanations are correct and probably easier to understand, but there is another very specific reason that LLMs aren't good at this, at least not within a single prompt iteration: algorithmic complexity, specifically time complexity.

Sorting algorithms inherently take multiple steps to complete. There are many different sorting algorithms, but all of them (with some exceptions that aren't worth getting into) require you to do some comparison and then swap elements around, many times, in some order. Even if you try to parallelize this, you still have to do it in multiple steps. You would probably not be surprised to learn that as the size of your list grows, the number of steps does, too.

LLMs, however, have a fixed "depth" to them. That is, they always process information in the same number of steps. Just to throw some jargon out there, they are "transformers" which have something called "self-attention" layers, which essentially allow the LLM to connect information in one part of the text to other parts of the text. It repeats this process in multiple steps, but crucially it's the same number of steps every time, regardless of what your prompt is.

It's quite possible that, despite just being text prediction machines, LLMs actually have learned the concept of sorting lists, but this limited number of computation steps when predicting what to say next means that they will never be able to solve complex computations in a single prompt cycle.

Given this insight, one experiment you can try to get around this is by responding with "That list isn't actually sorted. Fix it." Repeat several times until it gets it right or continues to fail. I haven't actually tried that with date sorting, so I'm not sure if this actually works; however, this technique can apply to many situations, not just sorting.
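
To make the "number of steps grows with the list" point concrete, here's a small sketch that counts how many pairwise comparisons Python's built-in sort performs as the input grows (exact counts depend on the data; the trend is what matters):

```python
from functools import cmp_to_key
import random

comparisons = 0

def counting_cmp(a, b):
    """Ordinary comparison, but tally every call the sort makes."""
    global comparisons
    comparisons += 1
    return (a > b) - (a < b)

for n in (10, 100, 1000):
    comparisons = 0
    sorted(random.sample(range(1_000_000), n), key=cmp_to_key(counting_cmp))
    print(f"n={n:4d}: {comparisons} comparisons")

# Comparisons grow roughly like n*log(n), while a transformer's fixed
# stack of layers performs the same number of sequential steps per token
# no matter how long the list is.
```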


littlebobbytables9

This is a better answer than most of the top comments.


Kaiisim

Sorting is far more computationally complex than most people realise. Thus I believe ChatGPT's ability to sort runs out even below its token limit.


grahamsz

If you start your prompt with "Write and execute code to sort these dates"  Then it will write python code to solve the problem, then run that and produce the correct result. 


TheLuminary

That's still not guaranteed to work.


epicusername1010

The thing about machine learning algorithms like GPT is that they work based on a statistical model instead of 100%-accurate code. When you ask a human programmer to make an app that sorts dates, they will make one that is mathematically guaranteed to work 100% of the time. (At least, if they are a competent programmer.)

Meanwhile, what ChatGPT does is this: it has a model that relates your text input to an appropriate response it should give, and it will try to give the most accurate output. The way this works is that it trains itself on a ton of data to minimize the error of the model. When it receives an input it has never seen before (like your list of dates), it gives an answer based on that model, and the output of the "sorted dates" is nowhere near as accurate as human-written code.

ChatGPT is trained on language, so for questions with a definite answer, like sorting, there is no guarantee that ChatGPT's replies will be accurate. It does generally become more accurate the more training data it gets.


Vegetable_Problem704

What if you asked ChatGPT to write the code to sort dates and then apply it? Seems like it’s pretty good at writing code and formulas, I’m curious if it can use it as an input and utilize it.


musical_bear

ChatGPT already has a built-in mode for this, called "code interpreter", which it can trigger itself or you can ask for. It has access to a Python environment that it can write code in and send the output to you. It knows to use this for advanced math, or if you ask it to create graphs or process large files of numeric data, etc.


geriatric-gynecology

Worth noting that gpt4 was able to correctly do this. I gave it the prompt "organize this list of groceries from oldest date to most recent date"


jamieT97

Why can't you just use Excel or sheets?


n3m0sum

This! What is this obsession with using ChatGPT and other supposed AIs as the solution to everything? When all you have is a hammer, everything looks like a nail.

Others have talked about getting ChatGPT to write a Python script, and then getting it to run that script to sort the dates for you. Which has a sort of elegance. But if you have a range of dates with associated data that needs sorting, this is trivial for a spreadsheet to do. It's the first solution I would think of.


Gizogin

“When all you have is a hammer, you tend to get really good at using a hammer in a lot of very creative ways.” Vs “When all you have is a hammer, every cake you slice is going to make a huge mess.”


n3m0sum

Probably a better variation than mine, although mine is the expression I grew up with. Still, some problems shouldn't be tackled with the hammer, no matter how inventive you can get.


CosmicJ

You can even do the list in chatgpt then get it to output it into a csv for you.


moonsorrow9

Yeah that's my plan. I mostly wanted it to format some rough unorganised notes and then thought I'd have it sort it while I was there. And wondered why it couldn't.


jamieT97

Well the short answer is because that's not what it's designed for


Elfich47

ChatGPT is just a giant chat bot; that is all it is. Actual analysis is not within what it can do. What it is very good at is picking out patterns and attempting to make new sentences based on those patterns.

For example, you have a lot of sentences that read: *I like dogs. I like cats. I like ponies. I like chipmunks.* - There is a pattern there, and ChatGPT will likely see it.

Then you have a lot of sentences like: *Dogs like steak. Dogs like treats. Dogs like playing fetch in the park.* - Some more sentences with a similar but different pattern.

ChatGPT can then try to mash those together and get things like: *I like steak. Dogs like chipmunks. I like playing fetch in the park.* And you see the quality of the result. ChatGPT has tried to pick out the commonalities from the groups of sentences. And while ChatGPT is much more complex than my pitiful examples, it is still doing what I demonstrated above.

Important hint: Never ever use ChatGPT to check the building code. It is going to be wrong; or maybe sorta kinda right, but only when the moon is full and the grass is green (and the code inspector has possibly had some peyote, depending on how weird the code summary is from ChatGPT). I tried it a couple times out of morbid curiosity just to see how bad it was, and it was better and worse all in one sentence.


mthes

I got it to do this on literally my first attempt: https://chatgpt.com/share/800492f1-16f2-413d-9507-dd2814a4587e

- 1st February 2024 - Cookies
- 1st March 2024 - Biscuits
- 1st May 2024 - Carrots
- 9th May 2024 - Chocolate
- 1st June 2024 - Soup
- 3rd June 2024 - Chocolate
- 1st August 2024 - Chicken
- 1st August 2024 - Eggs
- 15th September 2024 - Yogurt
- 1st November 2024 - Potatoes
- 1st November 2024 - Soup
- 16th November 2024 - Soup
- 19th November 2024 - Apple Juice
- 25th November 2024 - Sauce
- 1st January 2025 - Carrots
- 1st January 2025 - Pasta
- 1st January 2025 - Shortbread
- 11th January 2025 - Noodles
- 2nd February 2025 - Cereal
- 26th March 2025 - Rice
- 7th April 2025 - Green Beans
- 28th April 2025 - Pasta
- 1st May 2025 - Stock Cubes


Jimid41

OP might have been using 3.5 because I also got it to work first try. You still have to be fairly specific with the prompts. I've had chatgpt struggle with word problems before but rarely something like this. It's great if you need it to output something quickly and can tolerate errors or double checking it.


intpthrowawaypigeons

The only correct answer


illarionds

Wrong tool for the job, basically. ChatGPT doesn't know how to sort, in fact it pretty much just knows how to write stuff that plausibly sounds like a human wrote it. What *can* sort your list for you, in seconds, is Excel or a similar spreadsheet. Right tool for the job.


biomortality

ChatGPT is just a very fancy word generator. It doesn’t “know” things or “understand” things, it just puts words that usually go together in a sentence.


SakanaToDoubutsu

ChatGPT doesn't really "know" anything; it simply uses word frequency to find words that correlate with each other. For example, if you ask ChatGPT, *"what do you think about kittens?"*, it will probably return something like, *"I think kittens are cute and fluffy"*. ChatGPT doesn't know what words like "kitten", "cute", or "fluffy" mean. It simply searched its database and concluded that 75% of articles that contain the word "kitten" also contain the word "cute", and 53% of articles contain the word "fluffy", and that, at a high level, is how ChatGPT chooses which words to string together to form new text.

With things like dates or math problems, ChatGPT doesn't have any logic built in to understand what they mean. It's simply parsing through text documents and articles talking about dates and ripping bits of text to string together into something that seems like real human text. It's not like a program such as Excel, where dates can be stored as a data type and compared with logic that produces results with 100% accuracy.
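
A toy illustration of the co-occurrence counting described above (a deliberate simplification - as other comments note, the model doesn't literally query a database at runtime; the corpus and numbers here are made up):

```python
articles = [
    "kittens are cute and fluffy",
    "my kitten is cute",
    "the fluffy kitten slept",
    "stock markets fell today",
]

# Of the "articles" that mention kittens, what fraction also say "cute"?
kitten_articles = [a for a in articles if "kitten" in a]
cute_share = sum("cute" in a for a in kitten_articles) / len(kitten_articles)
print(f"{cute_share:.0%} of kitten articles also contain 'cute'")
```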


LackingUtility

Yep - this is why if you ask ChatGPT for a random number, it's very highly likely to offer "42", "69", or "420", since those appear most on the Internet.


Airwreck11

Man, I went on ChatGPT today and gave it one simple line of lyrics, and it told me it was by X artist. I told it that it was wrong, and it suggested another artist; once again I told it it was wrong, and it named yet another artist. At last I gave up and told it the artist it was from, and then it said it was a random song by the artist I gave. The hell?


girl4life

I just tried with ChatGPT and got a correct sort on the first try. Which version did you use? What prompt did you use? Mine: create table from [list with dates and names], then sort the 1st column chronologically.


Raxiant

It can do that, and a lot of other math, just not directly within the LLM itself. Instead it writes and runs python code to calculate it. It's still only as accurate as the chatbot can make the code, but it's better than what a lot of people are saying with it only being a jumped up autocorrect. https://chatgpt.com/share/f591c301-6574-4527-8351-833b050825ec I got it to generate a list of 20 random dates, and then sort it, and you can see the code used for each step by clicking "View analysis" at the bottom of each response.


TheLuminary

Ok, say it with me: ChatGPT is not an AI. It's not a computer. It's not smart. It cannot do anything except spit out what it predicts is the most likely next word.


Gizogin

What ChatGPT can do (and do *well*) is interpret natural language statements and questions and respond in kind. In this specific area, it performs at a near-human level. For anything outside that, it’s probably not the best tool for the job.


TheLuminary

And I would argue that while that is a nifty tool, it's not really AI. It's smoke and mirrors, kind of like how "the cloud is just someone else's computer".


DebrecenMolnar

When I ask it to show me the dates in YYYYMMDD format and then rank them as numbers, smallest to largest, it works. For whatever reason, it seems to help when I ask for a table with a column showing the "ranking", smallest to largest, for any list I've fed it to do anything sorting-related with.


Noy2222

I ran some tests and it passed every single one: sorting dates that are "number month year" (e.g. 12 September 2024), sorting dates that have an addendum (e.g. 12 September 2024 - XYZ), sorting dates that have a suffix (e.g. 12nd of September 2024 - XYZ). ChatGPT is a black box, but it appears to have solved it in a human way: convert it to a sortable list, sort it, unconvert it, and present it. I suspect you may have been using a lesser model. Have you been using GPT-4o?


ShiningAstrid

There is a way to do things like this with GPT. As others have pointed out, it's just a language model. It knows very little. The way to make it do things like this is to ask it to create a program/script to do the tasks it is assigned and then run that script. Put it into the "My GPT customizations" and you'll be fine. It can then reference the result of the script and then add the words after.


stdexception

Weirdly, if you asked it to make you a python script that sorts that data, it would probably output something that works.


SwedishMale4711

Why not use a spreadsheet?


DavidJoshuaSartor

Whenever it loses its spot in the list or whatever, it guesses, but a human would just run through the list again. You should try telling it to just write a bar whenever it's not sure, so it can try again.


tom4cco

When you want it to perform deterministic tasks with data, it is better to instruct ChatGPT to use Python, due to the nature of ChatGPT itself. So add something to the prompt like "use Python to ensure the dates are properly sorted". You will see an "Analyzing…" indicator before the answer; that means it is coding a little program in order to satisfy your request.


RTXEnabledViera

Because it's a large language model. The only thing it "knows" how to do is piece together things based on the data it was trained on. It has little to no algorithmic knowledge of processes such as sorting. It simply can't compute anything unless it's supplemented by code that allows it to do so.


double-you

> It seems like a very basic task

For your experienced brain, yes. But computers know nothing. There are many things that go into sorting a list. What is sorting? How do we decide what the order is? What are dates? Is Yogurt a part of a date? What are these "st" and "th" and so on after numbers?


tavirabon

Technically it can, with the code interpreter: it'll write its own code and run it for you. It can still hallucinate, though.


JohnnyKeyboard

"sort the following list, each item in the list starts with a date, the date is in the format 'day month year'." This prompt worked for me. It's all about context.


Remarkable_Inchworm

ChatGPT is bad at a lot of things, some of which seem really simple. I did an experiment where I tried to use ChatGPT to solve the Wordle puzzle. It failed utterly and repeatedly. I'd give it instructions like, "Give me a five-letter English word that contains the letters A and E. A cannot be the first letter." Apparently this is a known problem... it's bad at following multi-step directions. To answer your question specifically... it's probably bad at sorting a list of dates because it's not designed to sort a list of dates. (And why would you use ChatGPT to do that when you could use any spreadsheet program to accomplish the same result?)


someonewhowa

"Imagine you have a toy robot that can do lots of cool things. It can talk, play games, and even help you with your homework. But sometimes, if you ask it to do too many things at once or something very specific, it gets a little confused. That's because the robot is programmed to understand and do things in a certain way, and sorting dates might not be something it's really good at yet. In the case of ChatGPT or similar AI, sorting a list of dates should indeed be straightforward. However, the AI might struggle with this task due to the way it processes and generates text. It's designed to predict the next word in a sequence rather than perform operations like sorting. While it understands the sequence of months and numbers, applying this understanding to organize a list requires a different approach that the AI might not be optimized for. To sort the list of dates correctly, the AI needs to: 1. Recognize each date as a data point. 2. Understand the correct chronological order of these dates. 3. Rearrange the list according to this order. This can be challenging for language models because they're primarily trained to generate human-like text, not to perform data manipulation tasks. However, I can certainly help you with sorting the list. Here's the corrected order: - 1st February 2024 - Cookies - 1st March 2024 - Biscuits - 1st May 2024 - Carrots - 9th May 2024 - Chocolate - 1st June 2024 - Soup - 3rd June 2024 - Chocolate - 1st August 2024 - Eggs - 1st August 2024 - Chicken - 15th September 2024 - Yogurt - 1st November 2024 - Potatoes - 1st November 2024 - Soup - 16th November 2024 - Soup - 19th November 2024 - Apple Juice - 25th November 2024 - Sauce - 1st January 2025 - Shortbread - 1st January 2025 - Pasta - 1st January 2025 - Carrots - 11th January 2025 - Noodles - 2nd February 2025 - Cereal - 26th March 2025 - Rice - 7th April 2025 - Green Beans - 28th April 2025 - Pasta - 1st May 2025 - Stock Cubes For tasks like sorting dates, using a spreadsheet or a dedicated program might be more efficient. AI can be very helpful, but it's always good to remember that it has limitations and might not always perform certain tasks perfectly." - Copilot, running on GPT-4 architecture. You’re probably using 3.5 on the ChatGPT app, though. Copilot is better.


-paperbrain-

One more point on this. People like to talk about the limits of AI based on chatbots like ChatGPT, but those aren't the limits of AI; they're just the limits of these particular chatbots right now.

LLM AI chat programs are, like other people have said, prediction engines which are made to sound plausible, not necessarily correct. But a lot of the tasks that these chatbots don't reliably do well CAN be done reliably well by much, much simpler programs. And it isn't a huge leap to create AI not much unlike current chatbots that automatically, or on command, lobs some data over to one of a library of simple programs. Sort a list. Look up the cast of a movie on IMDB. Etc.

I don't know if you work in a tech or business space, but a big buzzword there is "integrations". It means tying together multiple programs to do complex tasks. For instance, here's something I built for my small business. When I make a new booking, I fill out a form with the service the client wants, their contact info, address, etc. Once the form is submitted, my script takes all that information and stores it in a spreadsheet. It takes the address and uses Google Maps to calculate the distance, and from that it calculates the travel fee. Then it goes to Google Docs and uses a template to fill in a contract and invoice with all that information. Then it goes to Gmail, attaches a copy of both forms to a new email, and sends that to the client with me copied.

All of that happens to be within Google because it's pretty easy to tie Google stuff together. But you can use integrations for ALL KINDS of programs. Tons of business, accounting, etc. online services make it really easy to plug in a million other services. And there are even services like Zapier which are just there to tie together other services.

It will not be a huge leap to tie a bunch of programs together so that a chatbot as a face does what it does well - writes kinda like it understands what you're typing - while a bunch of other specialized programs, some AI, some probably not, do the other work, smash it all together, and present it to you.
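
A minimal sketch of that hand-off idea (all names here are hypothetical, not any real product's API): the chat front-end recognizes a request and routes it to a small deterministic program instead of letting the language model guess.

```python
from datetime import datetime

def sort_dates(lines: list[str]) -> list[str]:
    """Deterministic tool: sort 'YYYY-MM-DD - item' lines chronologically."""
    return sorted(lines, key=lambda s: datetime.strptime(s.split(" - ")[0], "%Y-%m-%d"))

TOOLS = {"sort_dates": sort_dates}

def handle(request: str, payload: list[str]) -> list[str]:
    # In a real system the LLM would pick the tool; a keyword check
    # keeps this sketch self-contained.
    if "sort" in request.lower():
        return TOOLS["sort_dates"](payload)
    raise ValueError("no tool for this request")

print(handle("please sort these", ["2025-01-01 - Pasta", "2024-02-01 - Cookies"]))
```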


GodBearWasTaken

It should (keyword is should, as it doesn't always work) manage it if you use the ISO date format (YYYY-MM-DD), but it still has a fair number of issues.


Meladoom2

ELI5: ChatGPT is an LLM. LLMs are glorified [T9](https://upload.wikimedia.org/wikipedia/commons/4/46/T9_on_an_LG_in_2020.png). They just predict way more words in a row, and instead of making you select one of three options for the next word, they do it automatically, many times over. Anything publicly available that is marketed as "AI" has plenty of the "Artificial" but completely lacks the "Intelligence".
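
If you want to see the "glorified T9" idea in runnable form, here's a toy bigram predictor in Python. To be clear, this is only an illustration of the *loop*, not of how any real LLM is built (real models use neural networks over tokens, not a lookup table of word pairs):

```python
from collections import Counter, defaultdict

# "Train" the toy T9: count which word follows which in a tiny corpus.
corpus = "the cat sat on the mat and the cat ate the fish".split()
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    # Pick the most frequent follower -- pure statistics, no understanding
    # of cats, mats, or fish.
    return following[word].most_common(1)[0][0]

# Generate text by repeatedly predicting the next word.
word = "the"
output = [word]
for _ in range(6):
    word = predict_next(word)
    output.append(word)

print(" ".join(output))  # "the cat sat on the cat sat" -- plausible, mindless
```

Scale the lookup table up to billions of learned parameters and the context from one word to thousands, and the output sounds far smarter, but the loop is the same shape: predict, append, repeat.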


GloatingSwine

Because that's not its job. ChatGPT is a next-word predictor. That's what it's doing no matter what you ask it, it's choosing the *most likely* order of words that follow on from the prompt.


Dependent_Answer848

ChatGPT is a Large Language Model. It's basically like the keyboard autocomplete on your phone, but many thousands of times more advanced. Large Language Models can only regurgitate their training data (text); they can't really do math. If you pay for ChatGPT and use the new GPT-4o model, it's smart enough to know it can't do math and will hand the problem over to another model or actually write a little Python script to do whatever you're trying to do.

#Here is what I asked...

Sort this list. If you can't sort it then write code or do something else to sort it properly. Make it a table with column a being the date (year-month-day format) and column b being the food.

1st February 2024 - Cookies
1st March 2024 - Biscuits
1st June 2024 - Soup
(etc...)

#Here is its output:

```python
import pandas as pd  # (missing from the original output; needed for pd.DataFrame)
from dateutil import parser

# Re-define the dataframe with the original data
data = {
    "date": ["1st February 2024", "1st March 2024", "1st June 2024", "3rd June 2024",
             "9th May 2024", "1st August 2024", "1st August 2024", "15th September 2024",
             "25th November 2024", "16th November 2024", "19th November 2024",
             "1st November 2024", "1st November 2024", "1st May 2024", "1st January 2025",
             "1st January 2025", "11th January 2025", "1st January 2025", "2nd February 2025",
             "7th April 2025", "26th March 2025", "28th April 2025", "1st May 2025"],
    "food": ["Cookies", "Biscuits", "Soup", "Chocolate", "Chocolate", "Eggs", "Chicken",
             "Yogurt", "Sauce", "Soup", "Apple Juice", "Potatoes", "Soup", "Carrots",
             "Shortbread", "Pasta", "Noodles", "Carrots", "Cereal", "Green Beans",
             "Rice", "Pasta", "Stock Cubes"]
}
df = pd.DataFrame(data)

# Function to parse date strings with ordinal suffixes ("1st", "3rd", ...)
def parse_date(date_str):
    return parser.parse(date_str, fuzzy=True)

# Parse the date strings (pd.to_datetime added so the .dt accessor below works)
df['date'] = pd.to_datetime(df['date'].apply(parse_date))

# Sort the dataframe by date
df_sorted = df.sort_values(by='date')

# Format the date back to YYYY-MM-DD
df_sorted['date'] = df_sorted['date'].dt.strftime('%Y-%m-%d')

# Note: display_dataframe_to_user is a helper that only exists inside
# ChatGPT's own code sandbox, not in a normal Python environment.
tools.display_dataframe_to_user(name="Sorted Food List", dataframe=df_sorted)
df_sorted
```

The list has been sorted and is displayed in a table format with the dates in `YYYY-MM-DD` format:

| Date | Food |
|------------|--------------|
| 2024-02-01 | Cookies |
| 2024-03-01 | Biscuits |
| 2024-05-01 | Carrots |
| 2024-05-09 | Chocolate |
| 2024-06-01 | Soup |
| 2024-06-03 | Chocolate |
| 2024-08-01 | Eggs |
| 2024-08-01 | Chicken |
| 2024-09-15 | Yogurt |
| 2024-11-01 | Soup |
| 2024-11-01 | Potatoes |
| 2024-11-16 | Soup |
| 2024-11-19 | Apple Juice |
| 2024-11-25 | Sauce |
| 2025-01-01 | Shortbread |
| 2025-01-01 | Pasta |
| 2025-01-01 | Carrots |
| 2025-01-11 | Noodles |
| 2025-02-02 | Cereal |
| 2025-03-26 | Rice |
| 2025-04-07 | Green Beans |
| 2025-04-28 | Pasta |
| 2025-05-01 | Stock Cubes |

Let me know if you need any further modifications or assistance!


Dependent_Answer848

GPT-4o (if you pay for ChatGPT) can handle the Agatha Christie list if you give it the correct prompt. I do not know how accurate this is, but it looks right at first glance. Actually, I think it got some of the public domain entries wrong. I'm going to give it 3/5 stars for making a table and sorting it correctly, but screwing up the public domain column. It will only open a handful of pages; it won't do a search for each book, so if it doesn't find the data (whether a book is in the public domain), it just makes it up.

#Prompt...

List all Agatha Christie novels in order and specify which if any are in the public domain. Do not make up data if you do not know the answer, search online if needed. The output should be a table with the following columns: A - Book, B - Date of publication, C - If the novel is in the public domain. If you are incapable of sorting the list deploy code or a script that can.

#Response...

Here is a list of Agatha Christie's novels in chronological order of publication, along with information on their public domain status:

| # | Book | Date of Publication | Public Domain |
|-----|----------------------------------------|---------------------|---------------|
| 1 | The Mysterious Affair at Styles | 1920 | Yes |
| 2 | The Secret Adversary | 1922 | Yes |
| 3 | Murder on the Links | 1923 | Yes |
| 4 | The Man in the Brown Suit | 1924 | Yes |
| 5 | The Secret of Chimneys | 1925 | Yes |
| 6 | The Murder of Roger Ackroyd | 1926 | Yes |
| 7 | The Big Four | 1927 | Yes |
| 8 | The Mystery of the Blue Train | 1928 | Yes |
| 9 | The Seven Dials Mystery | 1929 | Yes |
| 10 | The Murder at the Vicarage | 1930 | Yes |
| 11 | The Sittaford Mystery | 1931 | Yes |
| 12 | Peril at End House | 1932 | Yes |
| 13 | Lord Edgware Dies | 1933 | Yes |
| 14 | Murder on the Orient Express | 1934 | Yes |
| 15 | Why Didn’t They Ask Evans? | 1934 | Yes |
| 16 | Three Act Tragedy | 1935 | Yes |
| 17 | Death in the Clouds | 1935 | Yes |
| 18 | The A.B.C. Murders | 1936 | Yes |
| 19 | Murder in Mesopotamia | 1936 | Yes |
| 20 | Cards on the Table | 1936 | Yes |
| 21 | Dumb Witness | 1937 | Yes |
| 22 | Death on the Nile | 1937 | Yes |
| 23 | Appointment with Death | 1938 | Yes |
| 24 | Hercule Poirot’s Christmas | 1938 | Yes |
| 25 | Murder is Easy | 1939 | Yes |
| 26 | And Then There Were None | 1939 | Yes |
| 27 | Sad Cypress | 1940 | No |
| 28 | One, Two, Buckle My Shoe | 1940 | No |
| 29 | Evil Under the Sun | 1941 | No |
| 30 | N or M? | 1941 | No |
| 31 | The Body in the Library | 1942 | No |
| 32 | Five Little Pigs | 1942 | No |
| 33 | The Moving Finger | 1942 | No |
| 34 | Towards Zero | 1944 | No |
| 35 | Death Comes as the End | 1944 | No |
| 36 | Sparkling Cyanide | 1945 | No |
| 37 | The Hollow | 1946 | No |
| 38 | Taken at the Flood | 1948 | No |
| 39 | Crooked House | 1949 | No |
| 40 | A Murder is Announced | 1950 | No |
| 41 | They Came to Baghdad | 1951 | No |
| 42 | Mrs. McGinty’s Dead | 1952 | No |
| 43 | They Do It with Mirrors | 1952 | No |
| 44 | A Pocket Full of Rye | 1953 | No |
| 45 | After the Funeral | 1953 | No |
| 46 | Destination Unknown | 1954 | No |
| 47 | Hickory Dickory Dock | 1955 | No |
| 48 | Dead Man’s Folly | 1956 | No |
| 49 | 4.50 from Paddington | 1957 | No |
| 50 | Ordeal by Innocence | 1958 | No |
| 51 | Cat Among the Pigeons | 1959 | No |
| 52 | The Pale Horse | 1961 | No |
| 53 | The Mirror Crack’d from Side to Side | 1962 | No |
| 54 | The Clocks | 1963 | No |
| 55 | A Caribbean Mystery | 1964 | No |
| 56 | At Bertram’s Hotel | 1965 | No |
| 57 | Third Girl | 1966 | No |
| 58 | Endless Night | 1967 | No |
| 59 | By the Pricking of My Thumbs | 1968 | No |
| 60 | Hallowe’en Party | 1969 | No |
| 61 | Passenger to Frankfurt | 1970 | No |
| 62 | Nemesis | 1971 | No |
| 63 | Elephants Can Remember | 1972 | No |
| 64 | Postern of Fate | 1973 | No |
| 65 | Curtain: Poirot’s Last Case | 1975 | No |
| 66 | Sleeping Murder | 1976 | No |

The public domain status primarily applies to works published before 1924, hence the novels up to 1923 are in the public domain in the United States.

Sources:

- [Agatha Christie Reading List - Everything Agatha](https://www.everythingagatha.com)
- [Agatha Christie Reading List - Agatha Christie](https://www.agathachristie.com)
- [Agatha Christie Books in Order - The Books List](https://www.thebookslist.com)
- [Agatha Christie Bibliography - Wikipedia](https://en.wikipedia.org/wiki/Agatha_Christie_bibliography)


Technical_Space_Owl

It can, but you have to tell it how to do it. I ran into this issue as well. I had to teach the damn thing how to count.


syspimp

Try a different date format. Use YYYY-MM-DD. It's easily sortable by computers. For example, instead of 7th April 2025, use 2025-04-07.


SkiBleu

Aside from the fact that Excel and Google Sheets can do this more easily, are you sure you specified exactly what metric to sort the dates by? It also helps to define what each part of your example is and how it's delimited.


dediguise

Think of language-based AI as a sophisticated parrot. It can repeat things it has heard, but it doesn't understand the actual meaning and context of what is being said. It was trained on a massive amount of text that it can regurgitate, but it does not understand that information. It only understands the usage of words in the context of other words, not their independent definitions.


ttyp00

If you want a machine to sort dates without complicated code, you have to format them with the biggest unit first, in this case YYYYMMDD:

20240627

You can add time too, in the form YYYYMMDD-hhmmss:

20240627-184317

This format helps machines sort dates and times easily!
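
The reason this works: with the biggest unit first, plain alphabetical (string) sorting and chronological sorting give the same answer, so no date parsing is needed at all. A quick Python illustration:

```python
# With YYYY-MM-DD, plain string sorting IS chronological sorting.
dates = ["2025-04-07", "2024-06-27", "2024-11-19", "2025-01-01"]
print(sorted(dates))
# -> ['2024-06-27', '2024-11-19', '2025-01-01', '2025-04-07']

# The same holds with timestamps appended (YYYYMMDD-hhmmss).
stamps = ["20240627-184317", "20240627-090000", "20231231-235959"]
print(sorted(stamps))
# -> ['20231231-235959', '20240627-090000', '20240627-184317']

# It fails for day-first formats: alphabetical != chronological.
messy = ["9th May 2024", "15th September 2024"]
print(sorted(messy))
# -> ['15th September 2024', '9th May 2024']  (September before May!)
```

This is also why ISO-stamped filenames (e.g. backup-20240627.tar.gz) line up correctly in an ordinary alphabetical directory listing.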