<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Model-Training on Charlie Hulcher</title><link>https://charlie.engineer/tags/model-training/</link><description>Recent content in Model-Training on Charlie Hulcher</description><generator>Hugo -- gohugo.io</generator><language>en-us</language><copyright>&#169; 2026 Charlie Hulcher</copyright><lastBuildDate>Sun, 05 Apr 2026 00:00:00 +0000</lastBuildDate><ttl>60</ttl><atom:link href="https://charlie.engineer/tags/model-training/index.xml" rel="self" type="application/rss+xml"/><item><title>Customizing an Open-Source Model Without an ML Team</title><link>https://charlie.engineer/posts/customizing-models-without-ml-team/</link><pubDate>Sun, 05 Apr 2026 00:00:00 +0000</pubDate><guid>https://charlie.engineer/posts/customizing-models-without-ml-team/</guid><description>&lt;p&gt;We all consume AI models through APIs. That works, but they don&amp;rsquo;t always behave how we want, and they&amp;rsquo;re often less efficient than they could be for the specific tasks we have.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;ve always assumed that doing something about that, actually customizing a model, required a specialized ML team and serious GPU infrastructure. Is that still true in 2026?&lt;/p&gt;
&lt;p&gt;I wanted to find out. When Google released Gemma 4, I ran a hands-on experiment: how far can one engineer get with open-source tools, local hardware, and AI agents helping navigate the process?&lt;/p&gt;</description><category>ai</category><category>model-training</category><category>openclaw</category><category>open-source</category><category>agents</category></item></channel></rss>