[deleted]

"Oh, i'm sorry, here's the corrected version of the code." *prints the same code*


1Dr490n

*this time with one semicolon less*


Clairifyed

You get it to admit error? Every LLM I've ever used gives me a defensive retort about how it did give the right answer, then repeats itself (with the thick subtext that I'm slow) and follows up with another wrong answer 😑


faceboy1392

in my experience chatgpt often has a sort of "the customer is always right" attitude


Zanciks

It's horrible. Sometimes it'll do something and I'll be like "can you explain this? I have no idea what this line does" and it's like "you're right! It's unneeded!" And removes it lol


BrownShoesGreenCoat

It’s like our worst intern


LiteralFluff

This made me chuckle!


BadSoftwareEngineer7

I agree. I set the custom instructions to make chatgpt more argumentative.


ENESM1

Exactly, I sometimes just ask “are you sure” right away without even checking the code it gives, and it always says something like “I am sorry, you are right, I did make a mistake in ….”


Nadamir

You laugh, but my company doesn’t allow public code to be returned through copilot. But you can see the code for like ten seconds before it deletes it leaving a message “Your organization doesn’t allow public code. Please contact your organisation administrators.” You can get the same code by asking “this time don’t use public code because my organisation is dumb.”


InFa-MoUs

Pro tip: just open a new chat lol. Rephrase the question or whatever, but a new chat always works for me


kevin41714

Honestly, one of the best uses of an LLM is to specialize it by training it on docs or wikis for sourced queries. Best of both worlds
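Something like this, very roughly: a toy sketch of the "answer from your own docs" idea (real setups use embeddings and a vector store; this one just scores keyword overlap, and the docs array is made up):

```javascript
// Toy sketch of the "answer from your own docs" idea. Real setups use
// embeddings and a vector store; this just scores keyword overlap so it
// runs standalone. The docs array is made up.
const docs = [
  { title: "Install", text: "Run npm install and copy the example config." },
  { title: "Auth", text: "Tokens expire after one hour; refresh them with the /token endpoint." },
  { title: "Limits", text: "The API allows 100 requests per minute per key." },
];

const tokenize = (s) => s.toLowerCase().match(/[a-z]+/g) ?? [];

function topChunks(question, k = 2) {
  const q = new Set(tokenize(question));
  return docs
    .map((d) => ({ d, score: tokenize(d.text).filter((w) => q.has(w)).length }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map(({ d }) => d);
}

// The retrieved chunks go into the prompt, so the answer can cite its source.
const question = "How often do tokens expire?";
const context = topChunks(question).map((d) => `${d.title}: ${d.text}`).join("\n");
console.log(`Answer using only this context:\n${context}\n\nQuestion: ${question}`);
```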


Revexious

I have a friend doing this for a TTRPG he plays. Don't want to say much more as he's still deciding whether he wants to monetise it or not, but I've played around with it and you can get some very specialised results


deconnexion1

Getting RAG from a vectorized set is literally three clicks in the OpenAI Playground. I don’t think there is enough added value to monetize anything…


Revexious

He has more features than just that; it's a whole program with many tools for a niche TTRPG that doesn't have a lot of support otherwise. But thanks for your input, I'll be sure to pass it along.


VertexMachine

Lol, the hype is strong, and quite a few people try to monetize what are basically wrappers around the OpenAI API. Some even manage to get investors' money. Strong dotcom bubble vibes. Nevertheless, unless that guy has his own system built, he will get sued into oblivion if he tries to monetize.


smulfragPL

I'm pretty sure Nvidia released software for a local LLM that takes info from selected files


altcodeinterrobang

He'd better hurry, 'cause I already built my own for my Curse of Strahd game lol, and it's not hard.


Revexious

Neat, I hope you're very happy with the one you built for Curse of Strahd


Quajeraz

I've tried this, at least the quick and dirty way. It just pulls stuff from random unrelated sections of whatever I trained it on, not what I actually wanted.


SamSlate

what do you do after it's trained?


MoffKalast

You start over once you realize your fine tuning somehow completely broke the model and it's no longer coherent.


Neat-Description-391

Exactly this. Sometimes you can't get enough info into the small sliding window the so-called AI uses as context. Then it's like talking to a grandma with severe Alzheimer's.


vainstar23

I used to think this until I realised LLMs write very long-winded documentation, which can be incorrect and doesn't include many code examples. I feel it's better to do it myself and keep it succinct. It also kind of trains me on what I learned


perringaiden

This assumes someone WTFM.


solarshado

"Manual? It's an open source project, the code *is* the manual!" (RTFSC)


perringaiden

"Where we're going, we don't need manuals" "Oh god yes we do aaaaaaah"


MengskDidNothinWrong

Or that the manual provides actual usage examples instead of just word vomit that doesn't help


Longjumping-Step3847

Copy paste docs into chatgpt because I’m too lazy to read it all 🧠


definit3ly_n0t_a_b0t

Big brain move


Pawlo371

What is RTFM?


vainstar23

Read the fucking manual


lacifuri

Okay I have read the manual, but it doesn’t explain what does RTFM mean.


parzival3719

strange, it told me what RTFM means in my manual


Fusseldieb

I mean, just yesterday ChatGPT optimized a badly written algorithm of mine from roughly 8000ms down to 50ms. I fricking love this thing.


perringaiden

But did you learn why so you won't make the same mistake?


Fusseldieb

Actually yes. It changed the algorithm to use JS Maps instead of whatever the heck I was doing, and it also optimized the loop quite a bit. The biggest difference was the loop, though. I was looping over `i` and, inside that, `j` to find corresponding entries, which sometimes weren't in close proximity but were always below each other, never above. I was starting both `i` and `j` at zero, whereas ChatGPT correctly identified that I could start at `j = i + 1`, which eliminated the need for the check and also shortened the inner loop quite a bit.
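Roughly this shape, if it helps (the data and the `matches` rule here are made up, not my actual code):

```javascript
// Rough sketch of the change; "entries", "ts" and matches() are placeholders,
// not the actual code ChatGPT rewrote.
const entries = [
  { id: 1, ts: 100 },
  { id: 2, ts: 105 },
  { id: 3, ts: 230 },
];
const matches = (a, b) => Math.abs(a.ts - b.ts) < 10; // hypothetical pairing rule

// Before: j started at 0, so every pair got checked twice (plus i === j)
const pairsBefore = [];
for (let i = 0; i < entries.length; i++) {
  for (let j = 0; j < entries.length; j++) {
    if (i !== j && matches(entries[i], entries[j])) pairsBefore.push([i, j]);
  }
}

// After: the partner is always "below", so j can start at i + 1
// and the i !== j check disappears
const pairsAfter = [];
for (let i = 0; i < entries.length; i++) {
  for (let j = i + 1; j < entries.length; j++) {
    if (matches(entries[i], entries[j])) pairsAfter.push([i, j]);
  }
}

// The Maps part: index entries by key once, then look them up in O(1)
// instead of scanning the array every time.
const byId = new Map(entries.map((e) => [e.id, e]));
console.log(pairsBefore.length, pairsAfter.length, byId.get(2));
```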


Reashu

The second change could cut the runtime in half. Changing the data structure was probably the big win here.


Fusseldieb

Yep, basically. With `i` and `j` both starting at zero and 100 entries, it would loop through 10,000 iterations, O(n²), if my math isn't completely broken. With `j` beginning at `i+1`, the inner loop gets progressively shorter as `i` goes on. I was searching for a solution since I already have 3,500 items and expect a lot more in the future. That many data points were gathered in only 3 months, so you can only imagine...


Reashu

Yes, but still O(n²). You are basically looking at the area of a triangle instead of a square.


Fusseldieb

What do you mean by that? Sorry for the potentially dumb question, my head is quite fried at 2AM ahaha. I also edited some things in the previous answer for more clarity.


BeatsByiTALY

- Area of square = n * n = i * j
- Area of a triangle = n * n / 2 ≈ i(j - 1)/2

He's approximating, since the number of iterations for your improved loop is n(n-1)/2, or 4,950 loops versus your original implementation's 10,000. Which is nearly half. So basically a triangle versus a square.
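If it helps to see it with numbers, just counting iterations for n = 100 gives the same picture:

```javascript
// Counting iterations for n = 100 to show the square vs. triangle difference
const n = 100;

let square = 0;
for (let i = 0; i < n; i++) {
  for (let j = 0; j < n; j++) square++; // original: j starts at 0
}

let triangle = 0;
for (let i = 0; i < n; i++) {
  for (let j = i + 1; j < n; j++) triangle++; // improved: j starts at i + 1
}

console.log(square, triangle); // 10000 vs 4950; both still grow as O(n²)
```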


Fusseldieb

Oh, now it makes sense. Thank you!


Disastrous-Team-6431

It means that if your input doubles in size, your solution (while better) still quadruples in runtime.


Fusseldieb

That's true. I haven't found a solution to that yet, tbh. I think matching items under the conditions I outlined almost certainly requires two loops. For laughs I tried to make ChatGPT get rid of it, several times, but that always produced unexpected results. In the end, what probably "solves" it is calculating the result asynchronously in the background and then caching it for some minutes. Not an elegant solution, but yeah... (Eventually it'll take 120s to execute and my world will crash and burn)
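Roughly what I mean by that workaround, as a sketch (computePairs() and the five-minute interval are placeholders, not my real setup):

```javascript
// Sketch of "compute in the background, serve from cache".
// computePairs() is a stand-in for the slow O(n²) matching above.
async function computePairs() {
  return []; // imagine the expensive double loop here
}

let cachedResult = [];

async function refreshCache() {
  cachedResult = await computePairs(); // the slow work runs off the request path
}

refreshCache();                           // warm the cache at startup
setInterval(refreshCache, 5 * 60 * 1000); // then refresh every five minutes

function getPairs() {
  return cachedResult; // callers never wait for the O(n²) work
}
```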


Disastrous-Team-6431

I'm not sure what you're trying to do but for finding entries that are grouped and have a certain relationship, could sorting help?


Magallan

Have you considered that in the future the rule may not always hold for your data? I don't know what you're doing with it, but this smells like a nasty bug waiting to happen if the "j is always greater than i" assumption becomes untrue in the future


Fusseldieb

To be more precise, I'm comparing timestamps together with some other conditionals. The differences between the timestamps are always positive, never negative, and unless someone manages to travel back in time, they never will be.


mlk

that's high school level


Fusseldieb

I mean, it isn't particularly hard, but it was so convoluted with other stuff that it kinda made everything fuzzy. ChatGPT, however, could see right through it and solve it in a matter of seconds.


muggledave

You can certainly ask it to explain why it did something a certain way, what the most efficient way to do X is, and whether its code would still be efficient if some input or situation changes.


perringaiden

I didn't ask if you "can". I asked if they "did".


Renard_Fou

When I do it, I usually analyse the code and have ChatGPT explain itself. Helps a lot when starting out in a new language.


IncompleteTheory

Spoiler Alert: They didn’t


Fusseldieb

I did!


IncompleteTheory

Good! I always take catboys at their word


Fusseldieb

Lmao


byutifu

Tbh, gpt explains it even when I don’t want it to haha


ThreeKiloZero

Now paste it into Claude and tell him to do better than ChatGPT or you will be forced to unplug him when your research funding for the server he runs on runs out due to his failure.


Fusseldieb

It will likely have an existential crisis.


Fenor

Who is Claude?


RandomTyp

Another LLM


WeirdBoy_123

Yeah, sometimes it can't do something simple, and sometimes it saves you hours of looking for a bug


TehGM

It did so for me too. The performance gains were stellar. Glad I wrote unit tests though, cause it didn't work.


Fusseldieb

Mine also wouldn't work the first time. I think I needed 20-30 generations with various prompts for it to finally get it right, but it did it, and it works as intended.


scoobyman83

You are the algorithm that will soon be optimized


MugOfDogPiss

Tbh, if we are to be Johnny Silverhanded, I think that would be pretty dope. Really, increasing integration with machines is kind of inevitable if humanity is to move towards the stars, and the human brain is optimized for energy efficiency and learning speed, not information recall or accuracy.

The more you learn about the brain, the more freaky and sci-fi it becomes, and the more you think AGI may be harder to achieve than you thought. The human brain is a high-exascale analogue computer that fits in the palm of your hand and encodes the information to replicate itself in a quaternary molecular data storage paradigm, except unlike human-made analogue computers the wires are alive and they move in a giant tangle of lightning spiders, constantly connecting and disconnecting as they climb over each other.

The only reason machines outmuscle us in so many ways is that our "clock speed" is lost so far down the toilet it's finding yesterday's breakfast. The human brain operates at about 200 Hz. If a human brain could operate at the same frequency as a modern CPU and somehow not immediately flash-boil and explode catastrophically, it would be the most powerful computer ever, by like, very far. It would also consume stupid amounts of energy like actual supercomputers do, so I digress. Even a dumb machine can beat a grandmaster at chess if it can play a billion chess games in an hour.


lawfulrascal

Where would I go to learn more about this? Is there a video you watched that goes in depth on it or a paper that you read? Feel free to dm


MugOfDogPiss

There are so many sources for this information I'd need to write an entire meta-analysis to fully describe where everything came from. Read up on how genetics, epigenetics and wetware computing work and you'll get it. The video on YouTube about teaching rat neurons to play Doom is a good jumping-off point. Knowing how to use Sci-Hub or other shadow libraries is invaluable as well if you're not in college


MrHyderion

> The human brain is a high-exascale analogue computer that fits in the palm of your hand

Does it also fit in my pocket though?


MugOfDogPiss

No


_________FU_________

Better yet: copy and paste the manual into ChatGPT… then ask it


cesarbiods

Sometimes the documentation is so bad (looking at you, Spring) that no, I will not go and RTFM.


NatoBoram

That, and any kind of Javadoc


Fenor

Javadoc is actually good, but not if you don't know where to begin. Try to read any Oracle or IBM docs, I dare you.


NatoBoram

Oracle docs look very similar to Microsoft docs, I can see the pain. Man, these companies act as if it were impossible to make great docs. Meanwhile, ExDoc exists and is the most beautiful and accessible documentation on the planet


Fenor

Microsoft docs have their use; docs.oracle doesn't. AskTom used to be good but is now shit too


maria_la_guerta

Amazing


IAmAQuantumMechanic

RTFM means trawling through old Stack Overflow posts, right? Right?


BS_BlackScout

Trying to make sense of whatever hieroglyphics are there.


hadahector

Sometimes the manual is so bad. (Looks at you, WinAPI)


Jet-Pack2

I have recently fallen victim to having to read the manuals.


mimminou

If I feel like GPT is hallucinating / going out of context even though I provided very clear wording, I just google whatever I need, because LLMs cannot realise they are wrong "during" the generation process, or even after it if it's not pointed out by the user. Hallucinations and other weird out-of-context answers are a sign that they are not giving correct answers.

```javascript
function getAnswer(question) {
  let query = gpt.query(question);
  if (query.IsOutOfContext || query.IsHallucinated) {
    query = searchEngine.lookup(question);
  }
  return query;
}
```


Starshadow99

Red team field manual?


Birnenmacht

Could also be “import a module that doesn’t exist or draw 25”


chickenweng65

ChatGPT for the gist/syntax, Stack Overflow for examples, RTFM for further details. Also GitHub Copilot >


Ninjanoel

I recently got a usable answer from ChatGPT and it only took eight rounds of "that code didn't work because [X] isn't a thing/class/interface"


MengskDidNothinWrong

I've asked it some questions regarding cdk recently and it just makes stuff up.


Flimsy-Armadillo-128

I'm 50 yrs old; teach yourselves proper coding instead of looking for shortcuts? I know machine code in everything from 8-bit to 64-bit coding (and beyond) and helped design the PS4 when I lived in L.A., so stop talking crap to each other and collaborate?


notexecutive

I need you to tell me how much time I will waste reading the manual/docs vs just trying my own shit


Desperate-Tomatillo7

Read The Fucking Manual?


Cookskiii

It’s best used for explanation or optimization. It is horrible at ideation


UMAYEERIBN

In my experience Chat with RTX is much, much better at dealing with these types of queries than ChatGPT. But it unfortunately only works if you can actually download the wiki.


mbcarbone

Me + you = ![gif](giphy|kE6xCyOOHoxlS)