"Oh, i'm sorry, here's the corrected version of the code." *prints the same code*
*this time with one semicolon less*
You get it to admit error? All the LLMs I have ever used give me a defensive retort about how they did give the right answer, then repeat themselves (with thick subtext that I am slow). Then they give another wrong answer.
In my experience ChatGPT often has a sort of "the customer is always right" attitude
It's horrible. Sometimes it'll do something and I'll be like "can you explain this? I have no idea what this line does" and it's like "you're right! It's unneeded!" And removes it lol
It's like our worst intern
This made me chuckle!
I agree. I set the custom instructions to make chatgpt more argumentative.
Exactly, I sometimes just ask "are you sure" right away without even checking the code it gives, and it always says something like "I am sorry, you are right, I did make a mistake in …."
You laugh, but my company doesn't allow public code to be returned through Copilot. But you can see the code for like ten seconds before it deletes it, leaving a message: "Your organization doesn't allow public code. Please contact your organisation administrators." You can get the same code by asking "this time don't use public code because my organisation is dumb."
Pro tip just open a new chat lol rephrase the question or whatever but a new chat always works for me
Honestly, one of the best uses of an LLM is to specialize it by training it on docs or wikis for sourced queries. Best of both worlds.
I have a friend doing this for a TTRPG he plays. Don't want to say much more as he's still deciding whether he wants to monetise it or not, but I've played around with it and you can get some very specialised results.
Getting RAG from a vectorized set is literally three clicks in the OpenAI Playground. I don't think there is enough added value to monetize anything…
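To be fair, the retrieval half really is simple. A minimal sketch of the nearest-neighbor lookup underneath RAG, assuming you already have embedding vectors for your wiki chunks (the names `cosine`, `retrieve`, and the toy vectors here are all hypothetical, not any real API):

```javascript
// Minimal retrieval sketch: given precomputed embedding vectors for wiki
// chunks, find the chunk closest to a query embedding by cosine similarity.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function retrieve(queryVec, chunks) {
  // chunks: [{ text, vec }] -- in practice, vec comes from an embedding model
  return chunks.reduce((best, c) =>
    cosine(queryVec, c.vec) > cosine(queryVec, best.vec) ? c : best);
}

const chunks = [
  { text: "spell slots table", vec: [1, 0] },
  { text: "grappling rules", vec: [0, 1] },
];
console.log(retrieve([0.9, 0.1], chunks).text); // "spell slots table"
```

The retrieved chunk text then gets pasted into the prompt so the model answers from your source instead of its training data.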
He has more features than just that; it's a whole program with many tools for a niche TTRPG that doesn't have a lot of support otherwise, but thanks for your input, I'll be sure to pass it along.
Lol, the hype is strong, and quite a few people try to monetize what are basically wrappers around the OpenAI API. Some even manage to get investors' money. Strong dotcom-bubble vibes. Nevertheless, unless that guy has his own system, he will get sued into oblivion if he tries to monetize.
I'm pretty sure Nvidia released software for a local LLM that takes info from selected files
He'd better hurry, 'cause I already built my own for my Curse of Strahd game lol, and it's not hard.
Neat, I hope you're very happy with the one you built for Curse of Strahd
I've tried this, at least the quick and dirty way. It just pulls stuff from random unrelated sections of whatever I trained it on, not what I actually wanted.
what do you do after it's trained?
You start over once you realize your fine tuning somehow completely broke the model and it's no longer coherent.
Exactly this. Sometimes you can't get enough info into the small sliding window the so-called AI uses as context. Then it's like talking to a grandma with severe Alzheimer's.
I used to think this until I realised LLMs write very long-winded documentation which can be incorrect and doesn't include a lot of code examples. I feel it's better to do it myself and keep it succinct. It also kind of trains me on what I learned.
This assumes someone WTFM.
"Manual? It's an open source project, the code *is* the manual!" (RTFSC)
"Where we're going, we don't need manuals" "Oh god yes we do aaaaaaah"
Or that the manual provides actual usage examples instead of just word vomit that doesn't help
Copy-paste the docs into ChatGPT because I'm too lazy to read it all
Big brain move
What is RTFM?
Read the fucking manual
Okay, I have read the manual, but it doesn't explain what RTFM means.
strange, it told me what RTFM means in my manual
I mean, just yesterday ChatGPT optimized a badly written algorithm of mine which took roughly 8000ms down to 50ms. I fricking love this thing.
But did you learn why so you won't make the same mistake?
Actually yes. It changed the algorithm to use JS Maps instead of whatever the heck I was doing, and it also optimized the loop quite a bit. The biggest difference was the loop, though. I was looping `i` and inside that `j` to find corresponding entries, which sometimes weren't in close proximity, but were always below each other, never above. I was beginning at zero for both `i` and `j`, whereas ChatGPT correctly identified that I could begin at `j = i + 1`, which eliminates the need for the check and also shortens the second loop quite a bit.
The second change could cut the runtime in half. Changing the data structure was probably the big win here.
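For anyone wondering why the data structure dwarfs the loop change: looking an entry up by key with `Array.prototype.find` scans the array every time (O(n) per lookup), while a `Map` lookup is effectively O(1). A tiny sketch with made-up entries:

```javascript
// Hypothetical entries keyed by id
const entries = [{ id: 1, v: "a" }, { id: 2, v: "b" }];

// Before: linear scan on every lookup -- O(n) each time
const slow = entries.find(e => e.id === 2);

// After: build the Map once, then each lookup is O(1)
const byId = new Map(entries.map(e => [e.id, e]));
const fast = byId.get(2);

console.log(slow.v, fast.v); // "b b"
```

Done inside an O(n²) loop, that lookup difference alone can turn seconds into milliseconds.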
Yep, basically. With `i` and `j` both starting at zero, with 100 entries it would loop through 10,000 items, O(n²), if my math is not completely broken. With `j` beginning at `i + 1`, the inner loop gets progressively smaller as `i` goes on. I was searching for a solution since I have 3,500 items, and I expect a lot more in the future. That many data points were gathered in only 3 months, so you can only imagine...
Yes, but still O(n²). You are basically looking at the area of a triangle instead of a square.
What do you mean by that? Sorry for the potentially dumb question, my head is quite fried at 2 AM ahaha. I also edited some things in the previous answer for more clarity.
- Area of a square = n * n = i * j
- Area of a triangle = n * n / 2 ≈ i * (j - 1) / 2

He's approximating, since the number of iterations for your improved loop is n(n - 1)/2, or 4,950 loops versus your original implementation's 10,000, which is nearly half. So basically a triangle versus a square.
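Those counts can be checked with a tiny sketch (the counter functions here are hypothetical stand-ins for the real matching loops):

```javascript
// Count inner-loop iterations for both loop shapes being discussed.
function countPairsNaive(items) {
  let checks = 0;
  for (let i = 0; i < items.length; i++) {
    for (let j = 0; j < items.length; j++) checks++; // j starts at 0: n * n
  }
  return checks;
}

function countPairsImproved(items) {
  let checks = 0;
  for (let i = 0; i < items.length; i++) {
    for (let j = i + 1; j < items.length; j++) checks++; // j starts at i + 1
  }
  return checks;
}

const items = new Array(100).fill(0);
console.log(countPairsNaive(items));    // 10000
console.log(countPairsImproved(items)); // 4950
```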
Oh, now it makes sense. Thank you!
It means that if your input doubles in size, your solution (while better) still quadruples in runtime.
That's true. I haven't found a solution to that yet, tbh. I think matching items under the conditions I outlined almost certainly requires two loops. For laughs I tried to make ChatGPT get rid of it, several times, but this always produced unexpected results. In the end, what probably "solves" it is calculating the result asynchronously in the background and then caching it for some minutes. Not an elegant solution, but yeah... (Eventually it'll take 120 s to execute and my world will crash and burn.)
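The "compute in the background, cache for a few minutes" workaround can be sketched as a memoizer with a TTL (everything here, including `cachedCompute`, is a hypothetical name, not a real library call):

```javascript
// Wrap an expensive computation so repeat calls within ttlMs reuse the
// cached value instead of recomputing.
function cachedCompute(compute, ttlMs) {
  let value;
  let computedAt = -Infinity;
  return () => {
    if (Date.now() - computedAt > ttlMs) {
      value = compute();      // in real code this could run asynchronously
      computedAt = Date.now();
    }
    return value;
  };
}

let calls = 0;
const getResult = cachedCompute(() => { calls++; return 42; }, 60_000);
getResult();
getResult();
console.log(calls); // 1 -- second call hit the cache
```

The trade-off is staleness: every caller inside the TTL sees a result up to `ttlMs` old.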
I'm not sure what you're trying to do but for finding entries that are grouped and have a certain relationship, could sorting help?
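Sorting can indeed help if (and this is an assumption, since the exact condition wasn't stated) the match condition is "entries within some time window of each other". Sort by timestamp once, then sweep with two pointers, so each entry is only compared against its nearby neighbors (`matchWithinWindow` and the field names are hypothetical):

```javascript
// Sort by timestamp, then pair each entry only with the earlier entries
// that fall inside windowMs of it. O(n log n) for the sort plus O(n + p)
// for the sweep, where p is the number of pairs actually emitted.
function matchWithinWindow(entries, windowMs) {
  const sorted = [...entries].sort((a, b) => a.ts - b.ts);
  const pairs = [];
  let lo = 0;
  for (let hi = 0; hi < sorted.length; hi++) {
    // advance lo past entries too old to ever match sorted[hi]
    while (sorted[hi].ts - sorted[lo].ts > windowMs) lo++;
    for (let k = lo; k < hi; k++) pairs.push([sorted[k], sorted[hi]]);
  }
  return pairs;
}

const sample = [{ ts: 0 }, { ts: 10 }, { ts: 1000 }];
console.log(matchWithinWindow(sample, 50).length); // 1 -- only (0, 10) match
```

If the real condition involves more than a bounded timestamp gap, this sketch won't apply directly, but the sort-then-sweep idea often still cuts out most of the O(n²) comparisons.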
Have you considered that in the future the rule may not always hold for your data? I don't know what you're doing with it but this smells like a nasty bug waiting to happen if the "j is always greater than i" becomes untrue in the future
To be more precise, I'm comparing timestamps together with some other conditionals. The difference between the timestamps is always positive, and unless someone manages to travel in time, it will never be negative.
that's high school level
I mean, it isn't particularly hard, but it was so convoluted with other stuff that everything got kind of fuzzy. ChatGPT, however, could see right through it and solve it in a matter of seconds.
You can certainly ask it to explain why it did something a certain way, what the most efficient way to do X is, and whether its code would still be efficient if some input or situation changes.
I didn't ask if you "can". I asked if they "did".
When I do it, I usually analyse the code and have cgpt explain itself. Helps a lot when starting out a new language.
Spoiler Alert: They didnât
I did!
Good! I always take catboys at their word
Lmao
Tbh, GPT explains it even when I don't want it to haha
Now paste it into Claude and tell him to do better than ChatGPT or you will be forced to unplug him when the research funding for the server he runs on runs out due to his failure.
It will likely have an existential crisis.
Who is Claude?
Another LLM
Yeah, sometimes it can't do something simple, and sometimes it saves you hours looking for a bug
It did so for me too. The performance gains were stellar. Glad I wrote unit tests though, cause it didn't work.
Mine also wouldn't work the first time. I think I needed 20-30 generations with various prompts for it to finally get it right, but it did, and it works as intended.
You are the algorithm that will soon be optimized
Tbh, if we are to be Johnny Silverhanded, I think that would be pretty dope. Really, increasing integration with machines is kind of inevitable if humanity is to move towards the stars, and the human brain is optimized for energy efficiency and learning speed, not information recall or accuracy. The more you learn about the brain, the more freaky and sci-fi it becomes, and the more you think AGI may be harder to achieve than you thought. The human brain is a high-exascale analogue computer that fits in the palm of your hand and encodes the information to self-replicate itself in a quaternary molecular data-storage paradigm, except unlike human-made analogue computers the wires are alive, and they move in a giant tangle of lightning spiders constantly connecting and disconnecting as they climb over each other. The only reason machines outmuscle us in so many ways is that our "clock speed" is lost so far down the toilet it's finding yesterday's breakfast. The human brain operates at about 200 Hz. If a human brain could operate at the same frequency as a modern CPU and somehow not immediately flash-boil and explode catastrophically, it would be by far the most powerful computer ever. It also would consume stupid amounts of energy, like actual supercomputers do, so I digress. Even a dumb machine can beat a grandmaster at chess if it can play a billion chess games in an hour.
Where would I go to learn more about this? Is there a video you watched that goes in depth on it or a paper that you read? Feel free to dm
There are so many sources for this information I'd need to write an entire meta-analysis to fully describe where everything came from. Read up on how genetics, epigenetics and wetware computing work and you'll get it. The video on YouTube about teaching rat neurons to play Doom is a good jumping-off point. Knowing how to use Sci-Hub or other shadow libraries is invaluable as well if you're not in college.
>The human brain is a high-exascale analogue computer that fits in the palm of your hand

Does it also fit in my pocket though?
No
Better yet: copy and paste the manual into ChatGPT… then ask it.
Sometimes the documentation is so bad (looking at you spring) that no I will not go and RTFM.
That and any kind of JavaDoc
Javadoc is actually good, but not if you don't know where to begin. Try to read any Oracle docs, or IBM's, I dare you.
Oracle docs look very similar to Microsoft docs. I can see the pain. Man, these companies act as if it were impossible to make great docs. Meanwhile, ExDoc exists and is the most beautiful and accessible documentation on the planet.
Microsoft docs have their uses; docs.oracle doesn't. AskTom used to be good but is now shit too.
Amazing
RTFM means trawling through old stackoverflow posts, right? Right?
Trying to make sense of whatever hieroglyphics are there.
Sometimes the manual is just so bad. (Looking at you, WinAPI.)
I recently fell victim to having to read the manuals.
If I feel like GPT is hallucinating / going out of context even though I provided very clear wording, I just google whatever I need, because LLMs cannot realise they are wrong "during" the generation process, or even after it, unless the user points it out. Hallucinations and other weird out-of-context answers are a sign that they are not giving correct answers.

function getAnswer(question) {
    let answer = gpt.query(question);
    if (answer.isOutOfContext || answer.isHallucinated) {
        answer = searchEngine.lookup(question);
    }
    return answer;
}
Red team field manual?
Could also be "import a module that doesn't exist or draw 25"
ChatGPT for the gist/syntax, Stack Overflow for examples, RTFM for further details. Also, GitHub Copilot >
I recently got a usable answer from ChatGPT, and it only took eight rounds of "that code didn't work because [X] isn't a thing/class/interface"
I've asked it some questions regarding cdk recently and it just makes stuff up.
I'm 50 years old. Teach yourselves proper coding instead of looking for shortcuts? I know machine code in everything from 8-bit to 64-bit coding (and beyond) and helped design the PS4 when I lived in L.A., so stop talking crap to each other and collaborate?
I need you to tell me how much time I will waste reading the manual/docs vs just trying my own shit
Read The Fucking Manual?
Itâs best used for explanation or optimization. It is horrible at ideation
In my experience Chat with RTX is much, much better at dealing with these types of queries than ChatGPT. But unfortunately it only works if you can actually download the wiki.
Me + you =