Prompt: create a rap song about a guy named Roman writing .NET APIs

Sure, here is a possible rap song about a guy named Roman writing .NET APIs:

  Verse 1:
  I'm Roman, and I'm a code king
  I write .NET APIs, and I do my thing
  I've got Visual Studio on my screen
  Writing code that's clean and lean

  Roman, Roman, writing .NET APIs
  Roman, Roman, coding all day

  Verse 2:
  I'm a master of C#, no doubt
  I write APIs that always work out
  I'm a whiz with classes and objects too
  I can handle any challenge that comes my way

  Roman, Roman, writing .NET APIs
  Roman, Roman, coding all day

  Verse 3:
  I'm a pro with ASP.NET and Web API
  I can build a server in a jiffy
  I know my way around REST and JSON
  I can make your data flow like a river

  Roman, Roman, writing .NET APIs
  Roman, Roman, coding all day

  Verse 4:
  I'm a rockstar with Azure and the cloud
  I can make your apps run smooth and proud
  I'm a wizard with microservices and containers
  I can help your business grow and prosper

  Roman, Roman, writing .NET APIs
  Roman, Roman, coding all day.

  Roman, Roman, the .NET API man
  Writing code that's always in demand
  Roman, Roman, the code hero
  Writing .NET APIs for the win.
ChatGPT blew my mind this morning — I was struggling with a regex, so I asked it to help me.

Here's how it went:

A pretty incredible developer experience, if you ask me!

I've done the same thing. Here are the first things I tried with ChatGPT:


- Write a simple CRDT implementation in Swift

- How can I make a GraphQL server using Rust using the async-graphql and actix-web libraries?

- Write code to create a Dropbox clone in Rust

- How do I make an HTTP request in Javascript?

However, just like with Copilot, it can introduce subtle bugs. For example, in the Dropbox example, notice how it uses synchronous code in an async function. Also, longer prompts will reach the token limit and get cut off.

Interestingly, this must be what Replit is using for their AI codegen service too, since I believe they have some connection to OpenAI: they got early access to OpenAI's service last year with their previous Codex product.

I asked it a question about how to do something with dates in the .NET Entity Framework, and it gave me a long explanation, with code, of how rowversion is used for optimistic concurrency control. Completely unrelated to what I was asking.

I did ask it something more basic: "how to use fetch to download and parse a csv", and the code was helpful, although not much different from what I could find with google/stackoverflow. However, this next part was what really impressed me. The results used Papa for parsing, so I then asked it to do it with csv-parse instead, and it worked! That seems like the advantage over something like Copilot--it has contextual history of the question which can be used conversationally to tweak the results.
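For reference, the shape of an answer to that question looks roughly like this. This is a minimal sketch of my own that hand-rolls the parsing rather than using Papa or csv-parse, both of which also handle quoted fields and escaping (which this deliberately doesn't):

```javascript
// Minimal sketch: parse a simple CSV (no quoting/escaping) into
// an array of objects keyed by the header row.
function parseCsv(text) {
  const [headerLine, ...rows] = text.trim().split('\n');
  const headers = headerLine.split(',');
  return rows.map((row) => {
    const values = row.split(',');
    return Object.fromEntries(headers.map((h, i) => [h, values[i]]));
  });
}

// Download a CSV with fetch and parse it.
async function downloadCsv(url) {
  const res = await fetch(url);
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return parseCsv(await res.text());
}
```

Swapping the `parseCsv` body for a call into csv-parse is exactly the kind of one-line conversational tweak described above.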

This makes me wonder if the tech jobs of the future will fall into two camps:

1. those that can specify concise requirements for the AI

2. those that can check correctness of the code the AI generated.

I highlighted your prompt and clicked web search from the context menu and got this out of DuckDuckGo:

It's annoying blog spam that will hijack your attempt to scroll back with a modal asking you to enter an e-mail and subscribe, but it seems to be roughly the same thing, except the entire article appeared at once and I didn't need to interact with it and wait five minutes for the full answer.

I feel like I've mostly seen these AIs produce code for things that are mostly just copy-paste examples (login, fibonacci, primes). From a theoretical standpoint, I'm only impressed if it's combining things to achieve something that has no code example on the internet.

From a practical perspective I do think a lot of programmers are doing copy-paste in their day jobs, which means they should be reusing existing code (I would imagine there are numerous reusable customizable login forms).


For example I asked it this:

Can you write a javascript function that takes a callback and returns the value of that callback, but returns a cached value if that function has been called less than 5 seconds ago?

And it produces code that looks right but doesn't work (it doesn't handle arguments, failing its own example). However, upon reminding it to support arguments, it does work. I guess this is actually potentially useful.
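For comparison, a working version of what the prompt asks for — caching per argument list with a 5-second expiry — might look like the sketch below. Keying the cache on `JSON.stringify(args)` is one design assumption among several (it requires JSON-serializable arguments):

```javascript
// Wrap fn so that repeated calls with the same arguments within
// ttlMs milliseconds return the cached value instead of re-invoking fn.
function memoizeWithTTL(fn, ttlMs = 5000) {
  const cache = new Map();
  return function (...args) {
    const key = JSON.stringify(args); // assumes JSON-serializable args
    const hit = cache.get(key);
    if (hit && Date.now() - hit.time < ttlMs) return hit.value;
    const value = fn(...args);
    cache.set(key, { value, time: Date.now() });
    return value;
  };
}
```

The bug described above (ignoring arguments) would collapse all calls onto one cache entry; keying on the serialized argument list avoids that.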

Since I found out about this last night I have already used it to do most of the work of three (relatively simple) programming tasks in bash and JavaScript. One of them was a simple class needed to prevent skipping/reusing numbers in some process that multiple users were accessing (and could sometimes fail). The other was adding up and deleting some temp files in my specific directory structure. And then there was a certain way that I wanted to add a duration to a date and format it.
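The date task is the kind of thing it handles well. A plain-JS sketch of adding a duration to a date and formatting it (my own illustration of the task, not the generated code) could be:

```javascript
// Add a duration (in minutes here) to a date and format the result
// as "YYYY-MM-DD HH:MM" in local time. Plain Date arithmetic, no libraries.
function addMinutesAndFormat(date, minutes) {
  const d = new Date(date.getTime() + minutes * 60 * 1000);
  const pad = (n) => String(n).padStart(2, '0');
  return (
    `${d.getFullYear()}-${pad(d.getMonth() + 1)}-${pad(d.getDate())} ` +
    `${pad(d.getHours())}:${pad(d.getMinutes())}`
  );
}
```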

For a few of them it made a small error of one line that I had to fix. But also in a few cases I was able to just reply that there was a bug and what it was doing incorrectly and to fix it, and it worked.

It gave me code that was very specific, such as using the methods exactly as I named and described them in my request for the class.

It really depends on what you want to do. I just told it to build a React-like framework in Prolog. It first told me that's not possible because Prolog can't be used to build GUIs (not really true either). I then told it that it could just output the DOM manipulations as text and it agreed. But said it's just a text model and can't write programs.
I tried asking it some programming questions -- I found if I asked it things that I was likely to find lots of copies of (or similar things), it did quite well. As soon as I asked it something a bit more unusual, even if the program was quite small, it broke.

One easy way of catching it out is to ask "write X in language Y", where X isn't something most people would write in Y -- it usually got it wrong for me.

Just got kicked out of it an hour ago because it's too busy now.

Been playing with it the whole day; got an idea on how to fix an old bug in an old programme of mine (that is still running).

But, here's the thing, how good it is depends on the available knowledge out there.

I've now tried JS, Python, Solidity, Apps Script (haven't tested the code), Excel and Google Sheets.

When you're building common js, python, etc stuff, it's really handy to have one screen that basically gives you the answer rather than having to google it, parse through the posts, etc.

However, if you're on the bleeding edge of tech, or looking for more obscure topics, it really starts showing its limitations.

I ran into that, for example, when looking for Jupyter notebook commands, optimisations, and versioning (it gave me Python 2 results instead of Python 3, with obsolete/unmaintained packages).

Its limit is basically the knowledge that has already been created and made available.

Depending on what you're building, I'd suggest understanding what you're copy-pasting and the possible vulnerabilities that may arise from the code.

Overall, I'm sold - defo wouldn't mind paying 20 to 40 a month for this. If it's pricier, not so sure.

This is the new threat vector: AI copy pasta. Gonna make pen test research way more fun. Just describe the website you're at and read the source code.
Glory to Zalgo, it’s finally been solved:

It's decent for some results, but almost meaningless in others. I'd asked it to build a SwiftUI VideoPlayer application which is 5 lines of code. But I got an old solution, and despite giving proper guidance, it failed colossally and printed garbage towards the end.

Here's my convo with ChatGPT:

If the text and code are just random stuff making no sense then I wonder how different it is from the Copilot results (besides the convincing style of answers).

I wonder if this is lifted from a React tutorial, or if it is an amalgamation of other things.

The big question is: whose code is this? Is it, verbatim, someone else's copyrighted code (assuming a snippet like that is novel enough to be copyrightable)? Or is it derived from someone else's code? And if it's derived, is it a derivative work in the legal sense, or is it no more derivative than if a human had read a tutorial or some documentation and then later written code like it?

What happens when AI is learning from AI and no new content or advancement in a field comes from people? Won't it be an echo chamber of recycled content?
Genuine question for everyone here: if you were switching careers from a non-technical profession to learning computer science, where would you focus? Do you have any confidence in which skills will be future-proof? Or is everything open to being automated and humans are just biding our time? :)
We just had a conversation about this at the company. I think it's a good way to get code that we know how to write but waste time writing. E.g. I needed a function that changes the index of an array element, so I asked ChatGPT, which gave me a correct function. I know how to do that, but I'd need some time to think it through and test it out. This is just an augmentation of programming: tools that help us write better code.
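For the curious, a function that moves an element from one index to another (my guess at what "changes the index" means here) is exactly the sort of short-but-fiddly helper described:

```javascript
// Return a copy of arr with the element at index `from` moved to index `to`.
// Uses splice twice: once to remove the element, once to reinsert it.
function moveElement(arr, from, to) {
  const copy = arr.slice();
  const [item] = copy.splice(from, 1);
  copy.splice(to, 0, item);
  return copy;
}
```

Returning a copy rather than mutating in place sidesteps the usual aliasing surprises.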

Maybe it can substitute for Stack Overflow, and we'll get better and faster code and explanations. But in the next 30-50 years, who knows. I guess it can write a lot of code for us; maybe we will be obsolete in that period of time. But as we know, predictions of the future often go really, really wrong.

I am stopping all my contributions to GitHub. I don't want to end up among the unemployed.
I had a whole discussion with it yesterday about category theory as applied to asynchronous streams, and it was great. I think if it were a person we'd both have come out better programmers, but I guess it was just me.
ChatGPT has the IP problem with fast inverse square root again, like Copilot did:
It should be noted (and admitted in the blog post) that the high-level results from the new GPT-3 text-davinci-003 model and ChatGPT should be similar, albeit without fancy syntax highlighting.

Well, mostly. I fed the prompt from the OP into text-davinci-003 and got this:

    const handleLogin = (e) => {
      // check if the email and password are valid
      // perform login
      console.log(email, password);
Although that was with temperature = 0.7; temperature = 0.0 gives more reasonable results.
I put this prompt in:

Write a javascript file using the Express framework in NodeJS containing CRUD routes for a TODO entity that contains a name, a description, and datapoint indicating whether the TODO is completed or not, and connect this entity to the Sequelize module from the npm package manager

And it spat out a full tutorial on how to write an Express CRUD app hooked up to Sequelize with SQLite, with Sequelize types and all CRUD routes. It's pretty impressive.

Uh? Is it just me, or is this OpenAI website literally demanding a cellular phone number? I only have a landline number.

What is this? I just gave my email. Isn't that enough?

I've been playing with it, asking it to create a rest api for example for some blockchain testnet etc.

My assumptions up until now is that it provides kind of a template result for many things, which you could have searched online. It's like a complement to a search engine.

Did the code work?
I would like to use this, but I find it weird that it requires a phone number to use.
This works wonders assuming all code going forward will only be written with tools it was trained against. I imagine it will have no clue how to generate code for a fresh new framework.
Can it find errors, though? I mean, when people start to rely too much on a computer brain instead of using/training their own, they essentially become slaves to this system.
WITCH companies “consultants” would be avid users of this feature.
This is awesome. Would be amazing for a tutorial as well. I can code(defined loosely) in python, but seems like this would accelerate learning a new language significantly.
Can someone please try asking it to find proofs for unsolved mathematical conjectures?
Is this why Elon Musk is firing so many programmers at Twitter?
This seems like the ideal developer friend, who can intelligently search Google for a nearly perfect answer.
I wonder if ChatGPT could solve a travelling salesman problem?
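For what it's worth, "solving" TSP exactly is just brute force for small instances; the interesting question is whether the model reaches for a proper heuristic on larger ones. A brute-force sketch (my own, for reference) over a distance matrix:

```javascript
// Exact TSP by brute force: try every tour starting and ending at city 0.
// Only feasible for roughly n <= 10 cities, since it's O(n!).
function tspBruteForce(dist) {
  const n = dist.length;
  const rest = Array.from({ length: n - 1 }, (_, i) => i + 1);
  let best = Infinity;
  const permute = (remaining, last, length) => {
    if (length >= best) return; // prune: partial path already too long
    if (remaining.length === 0) {
      best = Math.min(best, length + dist[last][0]); // close the tour
      return;
    }
    for (let i = 0; i < remaining.length; i++) {
      const next = remaining[i];
      const rest2 = remaining.filter((_, j) => j !== i);
      permute(rest2, next, length + dist[last][next]);
    }
  };
  permute(rest, 0, 0);
  return best;
}
```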
Does it really generate the text like that, little by little, or is that just a needless animation?