Last year I attended GitHub Universe 2024 virtually. The biggest thing that stood out to me was how hard they were pushing Copilot and VS Code's AI integrations toward their goal of 1 billion developers. The genie is out of the bottle, and to be a professional engineer going forward you need to know how to leverage AI technology in your everyday work.

As part of attending I received a free one-month trial of GitHub Copilot. I didn't have any large code bases to test it with, so I just had it build me some basic desktop applications in Python: a calculator, games (snake, pong, Tetris), a to-do list. Most of the time it would produce a basic window and the app would start, but trying to do anything further would fail. This aligns with my current view: LLMs are good for helping you understand an existing code base and make minor tweaks, but building something from scratch won't happen until there's another breakthrough.

LLMs are resource-heavy, so you either need to pay for hosted services or have good hardware. I'm somewhat cheap, and I've always had the dream of a home lab, so I decided to put my own hardware (AMD Ryzen 7 5700G, NVIDIA GeForce RTX 3060 Ti, 32GB RAM) to use.

The easiest way to run a local LLM is Ollama. The biggest benefit is that it's containerized, so I found it easy to get up and running, and the web interface is nice and very ChatGPT-esque. I wanted to try it for development and found CodeGPT, a VS Code extension that acts as a locally powered GitHub Copilot. Getting the extension to connect to Ollama was janky, and overall not useful.
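For reference, the containerized setup looks roughly like this, following Ollama's published Docker instructions. The specific model (`llama3:8b`) is my assumption here as something that fits in the 3060 Ti's 8GB of VRAM; the post didn't record which model was actually used:

```shell
# Start the Ollama server in Docker with NVIDIA GPU passthrough
# (requires the NVIDIA Container Toolkit on the host)
docker run -d --gpus=all \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama ollama/ollama

# Pull and chat with a model sized for ~8GB of VRAM (assumed choice)
docker exec -it ollama ollama run llama3:8b

# Extensions like CodeGPT talk to this same server over its REST API,
# which listens on port 11434 by default
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3:8b", "prompt": "Why is the sky blue?"}'
```

The `-v ollama:/root/.ollama` volume is what keeps downloaded models across container restarts, so you only pay the multi-gigabyte download once.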

My partner and I do like that ChatGPT has an interactive conversation mode, so we subscribed to the basic plan at $20 a month. I don't think the value is quite there, especially with the basic issues we run into, but being able to upload documents and pictures, along with unlimited conversation mode, really helps with my job search and interview prep.