I built something like this too:
https://github.com/hitchdev/hitchstory/blob/master/examples%...
Just yesterday, I wrote an article about fine-tuning (FT) and learned about services like Entry Point AI.
Seems like an awesome idea. I'm curious how long it will take to get a model on a reasonable level.
Phind is pretty good and also the fastest model I've used recently, so I'd assume it's quite small, no?
Post Author: Getting a lot of requests, so I'm scaling the backend. Stand by.
Can you please add more info on the page to show why it's important and how it's helpful?
Shouldn't this be LoRA training?
Yeah I've been thinking about this lately.
LLMs come and go.
Prompt engineering techniques come and go.
But an eval / labelled dataset is always useful once you've built it.
“Learning” by prompting, calculating the loss against evals, and updating the prompt
Isn’t this just a very naive implementation of what DSPy does?
https://github.com/stanfordnlp/dspy
I don’t understand what is exceptional here.
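For what it's worth, the loop that comment describes can be sketched in a few lines. This is a toy illustration, not DSPy's actual algorithm: `run_model`, the eval set, and the candidate prompts are all made up, and the "model" is a keyword heuristic standing in for a real LLM call. The point is just the shape of the loop: try candidate prompts, score each against labelled evals, keep the best.

```python
def run_model(prompt: str, text: str) -> str:
    # Hypothetical stand-in for an LLM call: only a prompt that
    # mentions "sentiment" triggers the keyword heuristic; any
    # other prompt just answers "neg" for everything.
    positives = {"love", "great", "good"}
    if "sentiment" in prompt.lower():
        return "pos" if any(w in text.lower() for w in positives) else "neg"
    return "neg"

# Labelled eval set (invented examples).
EVALS = [
    ("I love this", "pos"),
    ("terrible product", "neg"),
    ("good stuff", "pos"),
]

def score(prompt: str) -> float:
    # Accuracy over the eval set; "loss" would be 1 - this.
    hits = sum(run_model(prompt, x) == y for x, y in EVALS)
    return hits / len(EVALS)

# "Updating the prompt" in its crudest form: pick the best
# candidate by eval score.
candidates = [
    "Summarize the text:",
    "Label the sentiment as pos or neg:",
]
best = max(candidates, key=score)
```

A real optimizer would generate new candidates from the failures instead of searching a fixed list, which is roughly where frameworks like DSPy come in.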