<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Autonomy on Thiago Avelino</title><link>https://avelino.run/tags/autonomy/</link><description>Recent content in Autonomy on Thiago Avelino</description><generator>Hugo</generator><language>en-us</language><copyright>© Avelino</copyright><lastBuildDate>Thu, 16 Apr 2026 10:01:08 -0300</lastBuildDate><atom:link href="https://avelino.run/tags/autonomy/index.xml" rel="self" type="application/rss+xml"/><item><title>The best PMs aren't more skilled. They're more free</title><link>https://avelino.run/best-pms-are-more-free/</link><pubDate>Wed, 15 Apr 2026 00:00:00 +0000</pubDate><guid>https://avelino.run/best-pms-are-more-free/</guid><description>&lt;p>In 2014, I had a decision on the table: migrate the video classification pipeline from CPU to GPU using CUDA.&lt;/p>
&lt;p>The problem was concrete. We processed video at scale - frame extraction with FFmpeg, visual features with HOG (Histogram of Oriented Gradients), classification with an SVM. We tested shallow neural networks, but they couldn't keep up with our volume on the hardware of the time. HOG + SVM was the state of the art we could afford, and it worked - but it ran frame by frame on CPU, and that didn't scale.&lt;/p></description></item></channel></rss>