Place where I write about stuff
Bits and bytes
LocalAI and llama.cpp on Jetson Nano Devkit
If you are a lucky(?) owner of the Jetson Nano Devkit (4GB) and no longer know what to do with it, you can try running LocalAI with llama.cpp on it.
The Jetson Nano Devkit is no longer supported by Nvidia and receives little to no attention. However, it can still do useful work, and if, like me, you recycle boards at home, you might want to have some fun running AI on top of it.
Create a question-answering bot for Slack on your data that you can run locally
There has been a lot of buzz lately around AI, Langchain, and the possibilities they offer. In this blog post, I will delve into the process of creating a small Slack assistant for yourself or your team that can answer questions about your documentation.
The problem
I work at Spectro Cloud, and we have an exciting open source project called Kairos (check it out at https://kairos.