Interesting ... It responded to a question with "I don't know".

Here's the full text in reverse-chronological order:


> By what measure is it popular? >> I don't know.

> Is neverthrow a popular npm library? >> Yes, neverthrow is a popular npm library. It is a fully-tested and type-checked npm package that provides type-safe error handling in TypeScript. It is designed to help users encode failability into the typesystem and provide a friendly API that lets them deal with failable values in a safe way.

> How do I install the neverthrow package? >> To install the neverthrow package, you can use the npm package manager. In your terminal, run the command `npm install neverthrow` and the package will be installed.

> What is typescript? >> TypeScript is a typed superset of JavaScript that compiles to plain JavaScript. It offers classes, modules, and interfaces to help you build robust components. TypeScript is designed for development of large applications and transcompiles to JavaScript.

> What is this blog post about? >> This blog post is about type-safe error handling in TypeScript, and how to encode failability into the typesystem using the Result type. It explains how to use the neverthrow package to create a Result type, and how to use the map and mapErr methods to handle success and failure cases.

> Enter a URL to read: >> https://gdelgado.ca/type-safe-error-handling-in-typescript

Thanks! I uploaded the terms and conditions of some insurance I'm considering and can ask it questions without having read it all myself. (Obviously I'm double-checking for accuracy in case of hallucinations, but I suspect it won't be long before I have the confidence not to.)

The crazy thing is I fell asleep a few hours ago wondering how to feed the insurance terms and conditions to ChatGPT so that I could query them, and then I woke up and saw this.

All this AI progress opens up so many possibilities that it's almost anxiety-inducing.

Very cool. I gave it https://www.gutenberg.org/cache/epub/3300/pg3300-images.html and it was able to pick up on some niche references in my questions. The text is quite long but it ingested it pretty quickly, which surprised me. Nice work. Does it cost you a lot to provide a service like this?
We built something similar but used a "studio" UI, because we can also link up with Slack, an embeddable widget, etc.

We are using a scraper for HTML and parsing docs in Node.js.

The tough part for us is normalizing all of the data and working with various file types and encodings.

You can try it out for free here.


Do you have any tips on how to effectively parse website content? I tested it on one of my websites, and it was able to answer questions based on content located in separate div/p containers. Do you divide content into different sections and use embeddings to find the relevant text, or do you feed the entire page content into the API?
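For reference, the common pattern here (an assumption about how such tools might work, not a confirmed detail of this one) is to split the page text into overlapping chunks, embed each chunk, and at question time retrieve the chunks most similar to the question to send to the API. A minimal sketch, with a toy bag-of-words vector standing in for a real embedding model:

```python
import math
from collections import Counter

def chunk(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping word-window chunks."""
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size])
            for i in range(0, max(len(words) - overlap, 1), step)]

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a real system would call an embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_chunks(question: str, chunks: list[str], k: int = 3) -> list[str]:
    """Rank chunks by similarity to the question; feed the winners to the LLM."""
    q = embed(question)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]
```

With real embeddings the ranking step is identical; only `embed` changes to an API or local-model call.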
Very cool. I uploaded a project kick-off doc and it did a fantastic job of summarizing the key points. I'm really stoked about the ChatGPT-meets-all-your-docs tooling that will come to help employees onboard more quickly without having to talk to another human!
Bing can do this (with GPT-4), but the problem is it has an unacknowledged limit to the amount it's able to read, so it mishandles summarizing a large document, seeming to read the start and a few random pages. How does yours handle large sources?

Edit: I tried this one with an 800 KB .txt, and after digesting it, it got two answers wrong (but at least related to the text) and then started spitting out "I don't know". I asked "what is this document?" because I saw in my previous test that it can get blocked and end up working with a 404 page ("this document is a page that says suspicious activity request denied"), but this time it just said "I don't know."

OP here. We now have a Chrome extension[1] to post whichever webpage you're on to klavier.ai for subsequent Q&A. It avoids the hassle of having to copy 'n' paste the URL.

We are slowly working through the issues reported in this thread. Thanks for the kind and constructive feedback!

[1] https://chrome.google.com/webstore/detail/qa-with-klavier/jb...

Kagi has something similar for search results. You can pick a result and ask questions about it. Having played with both for a bit, though, the results from Klavier seem superior.
Seems quite similar to https://github.com/whitead/paper-qa with a few more document types added
Really amazing!

First question: are there other tools that do this with ChatGPT? carrolldunham says that Bing can do it. Are there others?

Second question: what are your plans for the future? Could we integrate your tool into a website, with a pre-made list of documents and web pages, so that our members or customers can use the tool to get answers? And going further, could we imagine a similar tool answering questions received by email?

Third question: the tool works with French documents; do you intend to translate the interface?

Thank you!

Awesome! Love it. How does it compare to chatpdf.com ?
Can we point it to a product documentation website, with multiple child pages and sections, and ask questions across that corpus?

I am actually interested in creating such a chat dedicated to one product as an example / hobby project. Any pointers would be appreciated on where I can start, and which already-implemented open-source solutions I can leverage for quick results. Thanks!

I plugged in https://www.wikiwand.com/en/Enzyme_inhibitor, then asked it if computers can be used to identify whether a compound is an enzyme inhibitor. What a surprise.
This looks promising. For me, it successfully scraped a webpage that Bing Chat would not. A couple comments: (a) preserve newlines in the output (I'm trying to get a CSV output), (b) the output length appears to be truncated - too short for my application.
This is fantastic and a great application of GPT. I uploaded the AWS Lambda developer guide in PDF format (1300ish pages, sorry!) and it's working wonderfully. I hope you keep your website around and expand on it!
React/SPA websites aren't read properly.

So I uploaded a dessert recipe and noticed there is no formatting in the answer text. It is one big paragraph.

Other than that, the idea is great, but I got more "I don't know" than anything.

Do you call OpenAI's API or do you have a GPT model of your own?
Awesome product!

Is this a company, or just a cool thing you hacked together over a weekend? I ask because there are tons of companies trying to do the same. How are you planning to monetize or stand out?

This is a great idea! Do you plan to productize a self-hosted binary, possibly allowing other, locally-hosted LLMs to be used in place of OpenAI APIs?

There's definitely an enterprise use-case for this.

Have you evaluated the quality of the responses? In my experience retrieval augmentation is hit and miss, it might collate unrelated information from different passages.
> Sorry, our system encountered an issue. Please try again.
This is fantastic. In many ways I like it more than the traditional chatGPT. How are you able to get it to ingest such long documents?
This is very good. I gave it this HN discussion and asked it to list all links posted so far with description of each and it did.
Great. I uploaded a paper I was reading. You can easily summarise it and ask other questions.
Can you give some details about the concept of transforming a PDF into GPT input?
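Broadly (this is the generic approach, not necessarily what klavier.ai does): extract the text per page, split it into pieces that fit the model's context window, and either retrieve the most relevant pieces or stuff as much as fits into the prompt. A sketch of the prompt-assembly step, assuming the page texts have already been extracted (e.g. with a PDF library) and using a rough character budget as a stand-in for real token counting:

```python
def build_prompt(question: str, pages: list[str], budget_chars: int = 8000) -> str:
    """Pack as many page excerpts as fit a character budget, then append
    the question with an instruction to refuse when the answer is absent."""
    parts, used = [], 0
    for i, page in enumerate(pages, start=1):
        excerpt = f"[page {i}]\n{page.strip()}\n"
        if used + len(excerpt) > budget_chars:
            break  # context window exhausted; a real system would retrieve instead
        parts.append(excerpt)
        used += len(excerpt)
    context = "".join(parts)
    return (
        "Answer the question using only the context below. "
        "If the answer is not in the context, say \"I don't know.\"\n\n"
        f"{context}\nQuestion: {question}\nAnswer:"
    )
```

The "I don't know" instruction is also one plausible explanation for the refusals mentioned elsewhere in this thread: when retrieval misses, a well-prompted model declines rather than guesses.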
I think this application of GPT is way more useful than the chat interface. (Add an option for users to pay sooner.) Here are a few suggestions:

1) Allow it to take a search term, do a web search and allow the user to select from those results.

2) Allow it to look at more than one document.

3) Detect if the output contains math formulas/graphs and render them (or let me write a JavaScript post-processor so I can add that logic myself).

4) When a user question can't be answered, prompt the user to allow your system to web search and then include those documents.

5) Create a version that can be run locally for those of us with private data. You should charge a lot for that version (~$100k+ if the customer provides the hardware, and $1M+ if you have to provide the hardware (black box)).

6) Detect research papers and read the citations. You may have to ask the user for an SSO key to get the citations from paywalled sites.

7) Abstract responses need to be made more concrete. See if you can train the model to provide an example or describe the purpose or intuition when it responds.

I entered a wrong URL on the site by mistake, and now it doesn't work.
I chatted with it about a story I am writing. Inspiring!
Which version of GPT are you using?
How can I train a model on a bunch of PDFs and run it locally?
Used it to interrogate LangChain's docs. How very meta.