Last year I attended GitHub Universe 2024 virtually. The biggest thing that stood out to me was how much they were pushing Copilot and VS Code's AI integrations toward their goal of 1 billion developers. The genie is out of the bottle, and to be a professional engineer going forward you need to know how to leverage AI technology in your everyday work.

Attendees received a free 1-month trial of GitHub Copilot. I didn't have any large code bases to test it on, so I just made some basic desktop applications in Python: a calculator, games (snake, pong, tetris), and a to-do list. It would make a basic window and the app would start, but trying to do anything would fail. This still aligns with my overall assessment of LLMs: helpful for understanding an existing problem and making minor changes.

LLMs are resource heavy, so you either pay for the services or have good hardware. I have decent hardware (Ryzen 7 5700G, GeForce RTX 3060 Ti, 32GB RAM), so I put it to use.

Ollama is a great tool for running LLMs locally, and it's easy to set up: it ships as a container and is quick to get running. The web interface is nice and very ChatGPT-esque. I tried CodeGPT, a VS Code extension that works as a locally powered GitHub Copilot, but it was rough and not useful.
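For anyone curious, here's a minimal sketch of a containerized Ollama setup, assuming Docker with NVIDIA GPU support via the NVIDIA Container Toolkit (these follow Ollama's published Docker instructions; the model name is just an example):

```shell
# Start the Ollama server in a container, exposing its default port 11434.
# --gpus=all assumes the NVIDIA Container Toolkit is installed on the host.
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama ollama/ollama

# Pull and chat with a model inside the running container
# (llama3 here is an example model name).
docker exec -it ollama ollama run llama3
```

The `-v ollama:/root/.ollama` volume keeps downloaded models around between container restarts, which matters since models are multiple gigabytes each.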

My partner and I do like that ChatGPT has an interactive conversation mode, so we subscribed to the basic $20-a-month plan. The value is quite there for all the basic issues we run into, and being able to upload documents and pictures, along with unlimited conversation mode, has helped with my job search and interview prep.