Open-source LLMs Now Match the Quality of GPT-4
![](https://cdn.arstechnica.net/wp-content/uploads/2024/03/robots_fighting_2-760x380.jpg)
![](https://cdn.sanity.io/images/4zrzovbb/website/746bdd369fb3b738519c34af8159692b60dd7a9c-1200x600.png)
People have been hacking together workflows to program alongside LLMs, and that trend only continues as more models are released.
![](https://www.devmandan.com/content/images/2024/02/naturallanguagelevel.png)
The biggest alpha in coding with AI right now is:
— Pietro Schirano (@skirano) April 1, 2024
Debug with GPT-4, Code with Claude 3.
GPT-4 is still king when it comes to logic, but it's extremely lazy.
Meanwhile, Claude would do anything you ask.
The duo together is unbeatable.
![](https://portal-website-cursor-6vgns9m3m-anysphere.vercel.app/banner.png)
I just set up @ollama with the Cody extension on VS Code and it is awesome.
— Vlad (@deifosv) April 3, 2024
Here is how to do it.
✅ Install Ollama and download the models you want to use.
✅ Install Cody in VS Code.
✅ Go to the extension's settings and enable "Cody experimental: Ollama chat".
✅ When starting… pic.twitter.com/69tsiB2vFp
![](https://www.devmandan.com/content/images/2024/04/Screenshot-2024-04-24-at-3.00.38-PM.png)
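The steps in the tweet above can be sketched as a few terminal commands. This is a rough outline, not the tweet author's exact setup — the model name `llama3` is just one example, and the final step still happens in the VS Code settings UI:

```shell
# Install Ollama (official install script for macOS/Linux; see ollama.com)
curl -fsSL https://ollama.com/install.sh | sh

# Pull a local model to chat with (llama3 is one example)
ollama pull llama3

# Install the Cody extension into VS Code from the command line
code --install-extension sourcegraph.cody-ai

# Then, inside VS Code: open Cody's extension settings and enable
# "Cody experimental: Ollama chat". Cody will talk to the local
# Ollama server (ollama serve, default port 11434).
```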
Mark Zuckerberg, meanwhile, continues to confidently announce new Llama models.
![](https://www.devmandan.com/content/images/2024/04/GPUs.webp)
Me: Bearded Zuck you have to stop. Your smoked meat's too tough. Your swag too different. Your open source LLM is too bad. they'll kill you
— Mike Rundle (@flyosity) April 18, 2024
Bearded Zuck: pic.twitter.com/pdKwMUmpKS
Mark Zuckerberg announced Llama 3, which Meta has now released.
![](https://scontent.xx.fbcdn.net/v/t39.2365-6/423162455_1781617662307455_5553649501199931039_n.jpg?_nc_cat=111&ccb=1-7&_nc_sid=aa6a2f&_nc_ohc=3a91tB-1nXYQ7kNvgHxx1Dd&_nc_ht=scontent.xx&oh=00_AfBXLVNq1e0q_KFnMX-FH2xYwidX5XdLW7cArcJgNWjKLA&oe=662F46E6)
People have already been trying it out in different ways, but it is still early — it only came out a week ago.
Llama 3 8B running 1.89 tokens/s on a Raspberry Pi 5 is pretty CRAZY pic.twitter.com/kK6bHfYu1p
— Adam C.H. (@adamcohenhillel) April 20, 2024
![](https://www.devmandan.com/content/images/2024/04/Screenshot-2024-04-24-at-3.22.52-PM.png)
Boom!
— Brian Roemmele (@BrianRoemmele) April 22, 2024
Open source LLaMA 3 (on Groq) vs. closed source GPT-4.
Prompt: code a snake game in Python.
There is no comparison.
pic.twitter.com/uBwH6XqiWS
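The snake-game prompt has become a popular quick benchmark precisely because the core logic is so compact. As a rough illustration of what the models are being asked to produce (this is my own minimal sketch, not either model's actual output), the essential state update fits in a few lines of Python:

```python
# Minimal snake-game logic: the snake is a list of (x, y) grid cells,
# head first. One step moves the head, grows on food, and checks for
# wall or self collisions.

def step(snake, direction, food, width, height):
    """Advance one tick. Returns (new_snake, ate_food, game_over)."""
    head_x, head_y = snake[0]
    dx, dy = direction
    new_head = (head_x + dx, head_y + dy)

    ate = new_head == food
    # If the snake didn't eat, its tail cell moves away this tick.
    body = snake if ate else snake[:-1]

    hit_wall = not (0 <= new_head[0] < width and 0 <= new_head[1] < height)
    hit_self = new_head in body
    if hit_wall or hit_self:
        return snake, False, True

    return [new_head] + body, ate, False


# Quick demo: a two-cell snake moving right on a 10x10 grid toward food.
snake = [(2, 5), (1, 5)]
snake, ate, over = step(snake, (1, 0), food=(3, 5), width=10, height=10)
print(snake, ate, over)  # [(3, 5), (2, 5), (1, 5)] True False
```

A full playable version only adds a render loop and keyboard input on top of this update function, which is why the prompt makes such a fast, legible head-to-head test.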