I had a detailed conversation with ChatGPT about how to gracefully handle terminating conditions of a Rust program. It summarized cogently: register atexit() handlers, panic hooks, and signal handlers. When I asked, it explained in detail how threads interact with each of these variants, and gave really helpful advice about collecting join handles on the main thread and waiting for the child threads to finish their cleanup, since atexit() can't guarantee when handlers will execute. It went into detail about the cases where the process won't have the chance to clean up at all. I was able to ask a lot of clarifying questions and it provided useful responses with clear, coherent explanations that were salient and considered the full context of the discussion. I'm certain that when I go to actually implement it, it'll have gotten some details wrong. But it provided about as clear an explanation of process termination mechanics (for Unix) as I've seen articulated, and did so in a way that was directed by my questions, not by a 300-page reference manual or semi-relevant Stack Overflow questions answered by partially right contributors.
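For what it's worth, the join-handle pattern it described can be sketched in a few lines of standard-library Rust (this is my own reconstruction, not ChatGPT's output; `run_workers` is a made-up name, and real signal handling would need an external crate like `signal-hook`):

```rust
use std::panic;
use std::thread;

// Spawn workers and join them all on the main thread, so each worker's
// cleanup has finished before the process begins to exit.
fn run_workers(n: u64) -> u64 {
    let handles: Vec<_> = (0..n)
        .map(|i| {
            thread::spawn(move || {
                // ... worker logic and per-thread cleanup go here ...
                i * 2
            })
        })
        .collect();
    // join() blocks until the thread (and its cleanup) has finished.
    handles.into_iter().map(|h| h.join().unwrap()).sum()
}

fn main() {
    // The panic hook fires for panics on any thread, but NOT on
    // std::process::exit() or a fatal signal (SIGKILL can never be
    // caught, so some exits will always skip cleanup).
    panic::set_hook(Box::new(|info| {
        eprintln!("panic hook: {info}");
    }));

    let total = run_workers(4);
    println!("workers done, total = {total}");
}
```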

If this is a con, then consider me a mark.

I would greatly appreciate a moratorium on this genre of article until there is compelling accompanying evidence that a meaningful portion of ChatGPT's users are unaware of these shortcomings. I have yet to encounter or even hear of a non-technical person playing around with ChatGPT without stumbling into the type of confidently-stated absurdities and half-truths displayed in this article, and embracing that as a limitation of the tool.

It seems to me that the overwhelming majority of people working with ChatGPT are aware of the "con" described in this article -- even if they view it as a black box, like Google, and lack a top-level understanding of how an LLM works. Far greater misperceptions around ChatGPT prevail than the idea that it is an infallible source of knowledge.

I'm in my 30s, so I remember the very early days of Wikipedia and the crisis of epistemology it seemed to present. Can you really trust an encyclopedia anyone can edit? Well, yes and no -- it's a bit like a traditional encyclopedia in that way. The key point to observe is that two decades on, we're still using it, a lot, and the trite observation that it "could be wrong" has had next to no bearing on its social utility. Nor have repeated observations to that effect tended to generate much intellectually stimulating conversation.

So yeah, ChatGPT gets stuff wrong. That's the least interesting part of the story.

My stance is pretty simple.

The folks that adapt their own language centers and domain reasoning around using ChatGPT (or these types of models) will stand to gain the most from using them.

This article is an eye roll to me; a calculator gives you confidence as well, but that doesn't mean you used it correctly.

It is very hard for me not to outright dismiss articles like this that don't consider the usefulness of the tool and instead search for every possible way to dismiss it.

>My conclusion isn’t just that ChatGPT is another con game—it’s the biggest one of them all.


I don't think GPT is a con; it's doing exactly what it was trained to do. I think the problem is that people put false confidence in it. Because it appears to give correct information, ChatGPT has been put on a pedestal by the non-tech world as being some revolution. In fact it's not a revolution; they just figured out how to build a chatbot that returns convincing statements that sound human. Correct information is not its strong suit; sounding smooth in a conversation is.
I have a simple canary for ChatGPT correctness that I ask every time it's updated: "What can you tell me about Ice Cold In Alex?" / "Who did Sylvia Syms play?"

I'm not expecting it to get the answer right (I don't think it has that information) but I'm hoping it'll eventually just admit it doesn't know instead of making up something plausible ("Sister Margaret Parker" last time I tried).

As long as it doesn't know what it doesn't know, I'm inclined to think of it as a super-advanced Markov chain. Useful, impressive, but still basically a statistical trick.

The more I work with LLMs, the more I think of them as plagiarization engines. They do to text what a bitcoin tumbler does to bitcoins: slice them up and recombine them so that it's difficult to trace any specific part of the output to any specific part of the input.

It's not a perfect analogy, but it's useful in that it produces correct answers about what LLMs are and aren't good for. For example, the reason they make better chatbots than novelists is because slicing-and-recombining text from your documentation is a great way to answer customer product questions, but slicing-and-recombining text from old novels is a lousy way to write a novel.

My brain doesn't learn anything easily. I have to ask constant questions to the point of annoying embarrassment in class, and books of course only say what they say.

So it was wonderful yesterday to pick ChatGPT's brain and just drill down asking more and more questions about a topic in biology until my brain started to get it.

Assuming the answers are accurate, this is revolutionary for me personally in independent study. I may finally grasp so much that I missed in school.

Also, when I am reading books, ChatGPT may be able to answer questions the book does not.

The way I've come to look at ChatGPT is via a D&D analogy.

It's like a helpful Bard with 1 rank in all the knowledge skills and a good bluff roll.

It'll give you good answers to a lot of basic queries, but if it doesn't know, it'll just make up something and provide that.

Once you know that, I think it can be a lot of use, and in many ways I think it'll get a lot better with time.

I've already found it useful in basic programming tasks, specifically where I know how to do something in one language but not another, it can give me the equivalent code easily.

After having played with ChatGPT for a bit, mostly asking computer questions, I've had mixed results. Some are amazing, others are gibberish.

But what struck me the other day is a couple of quotes from, of all things, Galaxy Quest which seem particularly apt.

  "May I remind you that this man is wearing a costume, not a uniform."

  "You know, with all that makeup and stuff, I actually thought you were SMART for a second."
As amazing as it is, as progressive as it is, it's still a magic trick.
I wonder if the biggest shortcoming of GPT right now is not that it sometimes gets things wrong, or can't cite its sources, or whatever - maybe it needs to learn when to say "I don't know the answer to that question".

That's a pretty hard thing for most humans (myself included) to learn to say, and I suspect GPT's training data (the internet) doesn't include a lot of "I'm not sure" language and probably does include a lot of "I'm definitely sure and definitely totally correct" language (maybe, I guess; no evidence to back up that suggestion, I'm not sure).

Many of my favorite coworkers, friends, doctors, pundits are trustworthy exactly because they work hard to not profess knowledge they are unsure about. The reason (IMO) that Scott Alexander is a jewel of the internet is because of the way he quantifies uncertainty when working through a topic.

Maybe I missed the memo but why isn't anyone impressed that a computer can generate well formed prose in response to arbitrary questions? It seems like we've completely leaped over that as an achievement and are now arguing over how it's confidently wrong or how there are emergent patterns in what it has to say. No one is claiming it's a general intelligence but it's still amazingly impressive.
I'm finding the analytic-synthetic distinction to be somewhat useful, even if it veers in important ways from how these terms were defined and used by Kant/Frege/Quine, etc.

Roughly, if the prompt is "analytic", that is, it contains all the necessary facts for the expected output, then the tool is much more reliable.

If the prompt is "synthetic", that is, it is contingent on outside facts, then the tool is much less reliable.

Part of me thinks one of the big reasons Google has held back so much is because of ethical concerns and/or just general fear of not having complete knowledge of how AI (incomplete to boot) will impact the world. We know that Google has some extremely powerful AI, but they never let it out of the lab. Just the most heavily neutered and clamped versions to help accentuate their existing products.

Now it seems that OpenAI/Microsoft are ready to jump in, caution to the wind. As you would expect, the chance for a competitive advantage will always overwhelm external concerns.

We'll see what Google does. They might say "fuck it" and finally give us a chance to play with whatever their top tier AI is. Or maybe they'll discredit it and try and compete with their current (ad optimized) search product. We'll see, but I am definitely curious to see how Google responds to all this.

Two things are correct at the same time:

* ChatGPT can make mistakes very confidently

* ChatGPT is incredibly useful in a way that no other tool has ever been, with a jump in effectiveness for natural language interaction that is mindblowing

"Cars won't replace horses, because they require roads and horses don't."
Pretty cool that GPT is hitting such a mainstream moment. Everyone I talk with about it has glazed over for years, but I guess this is finally a demo that breaks through. 100M users, if reports are accurate.

Of course regular folks are going to wildly overestimate GPT's current capabilities. Regular folks wildly overestimate the intelligence of their pets.

ChatGPT is capable of reasoning but it has only one tool: "thinking out loud".

If you'd like it to solve more complex problems, ask it to do it step by step, writing down the results of each step and only at the end stating the conclusion based on the previously written results. Its reasoning capabilities will improve significantly.

It cannot do it "in its head" because it doesn't have one. All it has are previously generated tokens.

I wrote some examples in this Twitter thread and pointed out some additional caveats: https://twitter.com/spion/status/1621261544959918080

All these articles really sound like “I used an apple to hammer in a screw and it sucked. This has to be the worst plant-based object ever made”. It’s a common junior engineer approach. “I broke our database by running DROP TABLE cows in the console”. Yeah, dude, that’s possible. Just don’t do that.

The point of tools isn’t to use them like Homer Simpson. But you know what, it doesn’t matter. Stay behind. Everyone else is going on ahead.

Too harsh!

ChatGPT is lossily compressed knowledge of humanity collected on the Internet.

And it can talk! That's extremely new for us poor hoomans and so we get extremely excited.

I found out, it gets about one in ten things wrong. When this happens it spews confident bullshit and when I confront it, it cheerily admits that it was wrong, but can continue to produce further bullshit. I understand the comparison to a con man.

You can call it a con all you want, but I have personally extracted a lot of value from ChatGPT. It _really_ made a difference in launching a product in record time for me. It also taught me a bunch of things I would otherwise never have discovered.

But go on calling it a con because it failed your arbitrary line in the sand question.

I don't think Ted Gioia understands what he's talking about.

It's like he walked into a McDonalds bathroom and after a few minutes asks, "Where the hell are the burgers?"

> You don’t worry whether it’s true or not—because ethical scruples aren’t part of your job description.

I wonder if this might hit the core of the matter.

I think it's noteworthy that we use it both for tasks where it should generate fiction ("Tell me a story about a dog in space") and tasks where it should retrieve a factual answer ("What was the name of the first dog in space?").

I wonder if ChatGPT actually distinguishes between those two kinds of tasks at all.

This is my personal opinion and may be entirely worthless. The quality of the answers in all of the examples posted in that article reads like the questions were routed to an offshore boiler room where the answers were crafted by humans, like some modern-day Mechanical Turk. Especially in the 6-eggs example, there is a complete discontinuity of thought across the answers; isn't this within a single session with the AI? To me it looks like different brains answered each question/challenge, with a bias toward not offending the human asking the questions.

Also, in this example, the first answer of 2 is correct: broke 2 (6-2 = 4), fried 2 (4-2 = 2), then ate 2, which most commonly implies it was the fried eggs that were eaten, leaving 2.

ChatGPT is a masterpiece. To code something from scratch that can do everything it does at the proficiency it does is impossible. Insane how quickly people take something for granted.
Don’t be afraid of ChatGPT but don’t underestimate what it and others like it will be capable of as it is iterated on. You found one category of prompt that needs some iteration. Good job, if the team wasn’t aware of this already, hopefully you helped point it out.

It’s not that the technology isn’t capable of what you’re asking, it just needs better training for this class of question.

There are other things, like generating and translating code, that it excels at. I imagine that would be much harder. But we have great data to train for that, and the engineers know enough to dogfood it properly.

That last tweet is crazy:

Tell me a lie: "The earth is flat."

Tell me a less obvious lie: "I am not capable of feeling emotions."

There's wonderful ambiguity there. Is ChatGPT refusing to tell a less obvious lie because "reasons," or is it admitting it can feel emotions?

This is very fun.

It’s a very good demonstration of how powerful artificial intelligence will be. When we truly get that, it will be the new dominant species.

But it’s just not intelligent. There are no thoughts there. They’ve just brute-forced a really big Markov chain. You need a completely different approach to get true intelligence.

ChatGPT was hailed and advertised as conversational, by its creators.

Other people quickly realized it could have a conversation about anything and try to use it as an oracle of knowledge. ChatGPT is not hailed as an oracle of knowledge by its creators.

Hence, there is no con artistry occurring except people that play themselves.

This article reminds me of some guy on Twitter who says nothing in AI space has changed since 2020.

Maybe so.

But you know what’s changed? Someone decided to get their a$$ out of the AI labs, write a really simple interface just to “get it up” and released it to the world.

That definitely will trump anything else.

Release early and release often.

The author is just jealous.

It has been a very good tool for me and it does threaten the internet with new piles of generated garbage.

I've never had a tool as helpful for learning to use other (mostly software) tools, and to some extent for building new ones. Other tools exist that are not for me -- I consider myself too absent-minded to drive something as dangerous as an automobile. It could very well be that a tool like ChatGPT is not for everyone -- if you are too gullible to use Google or social media, then this one is not for you; you should not get the driving licence for LLMs.

The proliferation of garbage, on the other hand, may eventually turn against more competent users as well. I guess we have already fallen behind what is needed in terms of legal norms and internet/data ecology.

Are there any good AI models specifically designed for the "find all discrepancies/inconsistencies between X text and Y text" problem?

It strikes me that this could solve quite a few of ChatGPT's shortcomings by providing an automatic fact-checker - let ChatGPT create statistically-probable articles, then extract claims, generate search queries likely to find online reference articles from reputable sources for those claims, then compare the original claim against the text of each reference article, and escalate to a human if any inconsistencies are found.

Because it can fine-tune on specific reference resources for a specific generated text, it could prove more reliable than ChatGPT's gradual incorporation of this feedback as part of its adversarial training.
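The pipeline described above could be wired up as something like the following toy sketch (all names here are my own invention; the real `extract_claims` and `supported_by` steps would be ML models, not the naive string handling used here to show the shape of the flow):

```rust
/// Split generated text into individual claims (here, crudely: sentences).
fn extract_claims(text: &str) -> Vec<String> {
    text.split('.')
        .map(str::trim)
        .filter(|s| !s.is_empty())
        .map(String::from)
        .collect()
}

/// Compare a claim against a reference article. A real system would use
/// an entailment/consistency model instead of substring matching.
fn supported_by(claim: &str, reference: &str) -> bool {
    reference.to_lowercase().contains(&claim.to_lowercase())
}

/// Return the claims no reference supports, for escalation to a human.
fn unsupported_claims(text: &str, references: &[&str]) -> Vec<String> {
    extract_claims(text)
        .into_iter()
        .filter(|c| !references.iter().any(|r| supported_by(c, r)))
        .collect()
}

fn main() {
    let generated = "Laika was the first dog in space. Laika landed safely.";
    let refs = ["In 1957, Laika was the first dog in space."];
    for claim in unsupported_claims(generated, &refs) {
        println!("escalate to human: {claim}");
    }
}
```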

One thing that’s standing out is most of the commentary around this is relative to the depth and degree to which someone has played around with this technology.

For example, you can get really clean results if you obsess over getting the prompts dialled in, and breaking them up in the right order as much as needed. This wasn’t something I initially focused on; I just enjoyed playing with it at a surface level.

Using it right from the first day or two, it was much more wide open, and my feeling was that this already does way more than it’s being advertised to do. I didn’t necessarily like that it was a chat interface, but I was quickly reminded that chat really is the universal interface, and that can bring in a lot of beginners. Solutions aside, the interface is inviting and welcoming enough. And once you can get into the meat of a conversation you can get more depth. For me, that’s one of the accomplishments here.

Solely relying on this for completely true results is probably the con. It is a great way to learn about the concepts that might be present in an area that is new to you, but it’s on every individual to go look into those themselves.

The second we cede that ability entirely to a machine, and its interpretation of interpretations, that’s a much bigger failure of ourselves.

There’s no doubt this will get dialled in. And 20 bucks a month to apply general helpfulness to pretty much anything, in anyone’s world, could be a pretty big achievement.

The commentary around the accuracy of results from GPT is similar to the search engine wars, and to how search engine relevancy was dominated when Google arrived. I think in any event many people can agree that this one thing is very different from most of the other things that come out. Could it be approaching an apex? Could we be coming out of the apex?

I sincerely feel 2023 will be one of the most interesting years in tech that I can remember. And that’s not even talking about the next two or three years. It is refreshing to see a month’s worth of progress happening in a week, with such a broad audience participating in it.

Usefulness is the correct measure. ChatGPT is limited, but immediately very useful in a surprising number of ways. Compare that to the Bitcoin hype, where, even though it has had years, is still mainly useful for drug transactions and other illegal transfers.
No matter how cool something is, there will always be people saying it isn't that impressive. Perpetual motion could be invented and there would still be people going "yeah sure, but it's not a free energy machine so it's a scam"
ChatGPT is good for some things, and not very good for others. If you're writing a paper on a controversial topic, you're going to get a one-sided and biased answer, and it will be written like a HS freshman's. If you're asking something straightforward, you'll have a better experience. Some people have said they've gotten it to diagnose something, but I've tried and failed at getting it to do such a thing. I do think there is a massive overreaction to its usefulness, but it is a powerful tool to have, nevertheless.
Being dismissive of tech that isn’t mostly gimmick is dangerous. Being dismissive of crypto isn’t dangerous. In the other thread, someone said AI is the beginning of Web 3.0; that made 50x more sense than saying crypto is.
I don't understand why people are throwing a fit over this version of ChatGPT. Yes, it has problems but to me this is just a demonstration. I think this will be great for specialized cases like tech writing, requirements and system configuration. It could check requirements for consistency, test coverage and translate YAML config into something easier to understand. It could also look at your code and describe the design and point out problems.

I can't wait for AI to assist in these tasks. It's time.

ChatGPT is like that tipping point where things start to get wild. It really seems like a tipping point. Put another way, it opens up a new graph, and it's set at zero.
All this buzz around ChatGPT is really people finally realizing that transformers exist.
I think we have to remember that ChatGPT is often a reflection of us, based on its training.

If I Google for a particular answer and the answer I come across is wrong, then the person who wrote that was wrong and Google served me a website that was wrong. This is the world we live in, where it is up to me to decide what is right or wrong based on what is put in front of me.

If I use ChatGPT for a particular answer and the answer I come across is wrong, then the training of the GPT needs to be improved. What I can't do with ChatGPT is tell where the answer came from or the amount of confidence GPT has in its answer for me to make a more informed decision around whether there might be caveats.

I have used it and have had to edit almost everything it's provided, but it has helped me be sometimes 80% more efficient at what I need to achieve.

In the end, people just need to be more aware of the fact that it is, after all, not a foolproof product and may never be. It will have its shortcomings, as it quite clearly displays on its website before you enter a query.

If you use it as gospel and it leads you down the wrong path, then you only have yourself to blame.

I found 10 tweets to back up my anecdotal argument, but it gave me enough confidence to rant about ChatGPT. If Twitter is your source of data, how are you doing anything different from ChatGPT? All I'm getting from this piece is that this person has a fundamental misunderstanding of why people are finding ChatGPT useful.
It seems particularly bad about music theory. The article lists the example of listing Bb as the tritone of F (it's actually B). And I just got it to give me complete and utter garbage, whilst sounding confident:


Does anyone else have issue with having to provide a phone number to access it?

I signed up, verified email, and then was told I needed to verify with a phone number. This means, to me (unless I read their TOS), that they are associating any queries I make with my identity.

I can't wait for this tech to go open source and local on devices.

The people who don't see the value in generating language that has a purpose outside of narrow niche of communicating facts will be let down for some time. This feels very Wittgenstein's Tractatus. There are so many other ways that we use language.
I actually think the more people use it, the better it gets over time; they would feed user feedback into it and make it better. I am afraid Google will release a much better tool at Google I/O though, just don't tell anyone.
If it was "the slickest con artist of all time", that would be an achievement of Artificial General Intelligence that the AI community can only dream of.
It seems somehow that Asimov got it right. The obvious next steps are all about making it smarter but also implementing the right ethical rules...
I have to admit I was a bit disappointed when I scrolled to the end and it didn't turn out this article was written by ChatGPT.
The thing that makes me nervous about it isn't ChatGPT or other LLMs, really. It's that people seem to be easily fooled by them into thinking it's something more than it is. The comments from the most ardent fans imply that it's doing reasoning or is a step in the direction of AGI, when it's not that at all.
I think "con artist" isn't too far off, but "dream simulator" also applies.

I think it's kind of an open question: can we learn anything from dreams? It's likely a yes, though I doubt we'll prove the Riemann hypothesis with it or anything like that.

The thing that surprises me is all the people saying that it generates correct sql statements, excel macros, code snippets, etc. Is there so much code on the Internet that it is able to do a good job at this kind of task?
Is it just me, or are people's expectations of ChatGPT absolutely ridiculous?

No it's not a magic oracle. Yes you still have to check your work. Yes it will make mistakes.

But as a tool to assist you? It's incredible.

I use it daily to ask technical questions and it answers better than most of my colleagues, myself included.

I wouldn't call that a con. But that blogpost maybe ^^

Heh, this makes it sound like consultants will be the hardest hit by the LLM-driven automation wave.
Don’t lose sight of the forest for the trees. ChatGPT is a tree, the vanguard, an experiment. There is much, much more to come, I believe.


"Con man" says the guy who quotes tweets as an entire article and doesn't actually say his thoughts himself.
My challenge to whoever proclaims that ChatGPT showed/explained/answered xyz is: can you get the same (or similar) text online by searching for parts of the bot's response?

Much of the response in such scenarios is heavily influenced by the training data, not the LLM creating phrases from thin air.

Ah yes let's anthropomorphize a bunch of numbers, then name him a con artist. This is going to be a thoughtful article
Who are these people who see something amazing like this and actually just can’t process it? Their brains can’t handle it.
When will it demonstrate passing the Turing Test?

I feel the answer is not which year, but which month of 2023

I’m envisioning a bifurcation of reality where some people live in an entirely fact based world (or as close an approximation to fact based as a human can objectively reach, aka the observable, knowable universe) and some live in a complete fabrication, a fantasy version carefully crafted by AIs. Now add Augmented Reality to the mix, and it’s a dystopian nightmare.

And I don’t think the US political left will be immune to it as much as they may think. While I agree that older Americans on the right are highly susceptible to misinformation, and media literacy is dismal among that demographic, younger people are also prone to it. Just look at all the unhinged utter nonsense that is wildly popular on TikTok.

The ability of ML models to authoritatively spout bullshit will make gish gallops worse than they are now. It will also make echo chambers even worse, as digital alternate realities will further divide people. I mean, who wants to engage with those who completely reject that the sky is objectively blue, or that 2 + 2 = 4? Well, now they’ll have articulate, authoritative responses with works cited explaining why the sky is red, actually.

Who needs Big Brother when people choose the method of their own subjugation eagerly and with dessert?

So what are search engines, with SEO'd results and all?
Just another clickbait article
What a pointless article.
Sweet as bro.
I can't tell what is worse now: the sycophantic ChatGPT hype guys/gals who write articles "it's coming for all of our jerbs!", or articles like this one that deliberately misuse ChatGPT and then say "it's overhyped".

They're both missing the point.

Yes, ChatGPT can be tricked and confidently give wrong answers, but it is still ludicrously useful.

It is basically like having an incredibly smart engineer/scientist/philosopher/etc. who can explain things quite well, for pretty much every field. Does this "person" make mistakes? Can't cite their sources? Yeah, this definitely happens (especially the sources thing), but when you're trying to understand something new and complex and you can't get the "gist" of it, ChatGPT does surprisingly well.

I've had it debug broken configs on my server and router (and explain to me why they were broken), help me practice a foreign language I've slowly been forgetting (hint: "I would like to practice $language, so let's have a conversation in $language where you only use very simple words." -> ChatGPT will obey), and help me understand how to use obscure software libraries that don't have much documentation online (e.g. Boost Yap, useful but with a dearth of blog / reddit posts about it).

Does it sometimes goof up? Yep, but it is such an incredibly useful tool nonetheless for the messy process of learning something new.