<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Gunnar Wolf - Nice grey life</title>
    <description>Gunnar Wolf - Nice grey life</description>
    <language>en-US</language>
    <pubDate>Mon, 06 Apr 2026 08:04:25 -0700</pubDate>
    <lastBuildDate>Mon, 06 Apr 2026 08:04:25 -0700</lastBuildDate>
    <generator>Jekyll (own template)</generator>
    <link>https://gwolf.org</link>
    <atom:link rel="alternate" type="text/html" href="https://gwolf.org" />
    <atom:link href="https://gwolf.org/rss.xml" rel="self" type="application/rss+xml" />
    
    <item>
      <title>As Answers Get Cheaper, Questions Grow Dearer</title>
      <pubDate>Sun, 08 Mar 2026 11:51:12 -0700</pubDate>
      <link>https://gwolf.org/2026/03/as-answers-get-cheaper-questions-grow-dearer.html</link>
      <image>https://gwolf.org/files/2026-03/answers-cheaper-questions-dearer.200.jpg</image>
      <guid isPermaLink="true">https://gwolf.org/2026/03/as-answers-get-cheaper-questions-grow-dearer.html</guid>
      <description>
	<![CDATA[
		 
		 <blockquote>
		 
		   This post is an <em>unpublished</em> review
		 
		     
		       
		         for <em><a href="https://cacm.acm.org/opinion/as-answers-get-cheaper-questions-grow-dearer/">As Answers Get Cheaper, Questions Grow Dearer</a></em>
		       
		     
		     
		   </blockquote>
		 
		 <p>This opinion article tackles the much discussed issues of Large Language
Models (LLMs) both endangering jobs and improving productivity.</p>

<p>The authors begin with a comparison, likening the current understanding
of the effects LLMs are having upon knowledge-intensive work to the
situation of artists in the early nineteenth century, when photography was
first invented: they explain that photography did not make painting
obsolete, but undeniably changed it in a fundamental way. Realism was no
longer the goal of painters, as they could not compete on equal terms with
photography. Painters then began experimenting with the subjective
experiences of color and light: Impressionism no longer limited itself to
copying reality, but added elements of human feeling to its creations.</p>

<p>The authors argue that LLMs make getting answers terribly cheap — not
necessarily correct, but immediate and plausible. In order for the use of
LLMs to be advantageous to users, a good working knowledge of the domain in
which LLMs are queried is key. They cite LLMs increasing productivity by
14% on average at call centers, where questions have unambiguous answers and
the knowledge domain is limited, but causing losses close to 10% to
inexperienced entrepreneurs who followed their advice in an environment
where understanding of the situation and critical judgment are key. The
problem, thus, becomes that LLMs are optimized to generate <em>plausible</em>
answers. If the user is not a domain expert, “plausibility becomes a
stand-in for truth”. They identify that, with this in mind, good questions
become strategic: questions that continue a line of inquiry, that expand the
user’s field of awareness, that reveal where we must keep looking. They
liken this to Clayton Christensen’s 2010 text on consulting¹: a
consultant’s value lies not in having all the answers, but in teaching
clients how to think.</p>

<p>LLMs are already game-changing for society, and will likely become more
so as they improve. The authors argue that for much of the 20th century, an
individual’s success was measured by domain mastery, but contend that the
defining factor is no longer knowledge accumulation, but the ability to
formulate the right questions. Of course, the authors acknowledge (it’s
even the literal title of one of the article’s sections) that good
questions need strong theoretical foundations. Knowing a specific domain
enables users to imagine what should happen when following a specific lead,
anticipate second-order effects, and evaluate whether plausible answers are
meaningful or misleading.</p>

<p>Shortly after I read the article I am reviewing, I came across a data
point that strongly validates its claims: a short, informally published
paper on combinatorics and graph theory titled “Claude’s Cycles”², written
by Donald Knuth (one of the most respected computer science professors and
researchers, and author of the very well known “The Art of Computer
Programming” series of books). Knuth’s text, and particularly its
“postscripts”, perfectly illustrates what the article under review
conveys: LLMs can help a skillful researcher “connect the dots” across very
varied fields of knowledge, perform tiring and burdensome calculations, even
try mixing together some ideas that will fail — or succeed. But only when
guided by a true expert in the field, asking the right, insightful and
informed questions, will the answers prove to be of value — and, in this
case, of immense value. Knuth writes of a particular piece of the solution,
“I would have found this solution myself if I’d taken time to look
carefully at all 760 of the generalizable solutions for m=3”, but having an
LLM perform all the legwork was surely a better use of his time.</p>

<p>¹ Christensen, C.M. <a href="https://www.truevaluemetrics.org/DBpdfs/Ideas/Christensen/How-will-you-measure-your-life.pdf">How Will You Measure Your
Life?</a>
Harvard Business Review Press (2017).</p>

<p>² Knuth, D. Claude’s Cycles. <a href="https://cs.stanford.edu/~knuth/papers/claude-cycles.pdf">https://cs.stanford.edu/~knuth/papers/claude-cycles.pdf</a></p>

	]]>
      </description>
    </item>
    
    <item>
      <title>Finally some light for those who care about Debian on the Raspberry Pi</title>
      <pubDate>Sat, 24 Jan 2026 08:24:22 -0800</pubDate>
      <link>https://gwolf.org/2026/01/finally-some-light-for-those-who-care-about-debian-on-the-raspberry-pi.html</link>
      <image>https://gwolf.org/files/2026-01/deb_raspi_cloud.200.png</image>
      <guid isPermaLink="true">https://gwolf.org/2026/01/finally-some-light-for-those-who-care-about-debian-on-the-raspberry-pi.html</guid>
      <description>
	<![CDATA[
		 
		 <p>Finally, some light at the end of the tunnel!</p>

<p>As I have said <a href="https://gwolf.org/2023/08/interested-in-adopting-the-rpi-images-for-debian.html">in this
blog</a>
and elsewhere, after putting quite a bit of work into generating the Debian
Raspberry Pi images between <a href="https://gwolf.org/2018/12/new-release-of-the-raspberry-pi-3-unofficial-debian-preview-image.html">late
2018</a>
and 2023, I had to recognize that I don’t have the time and energy to care
for them properly.</p>

<p>I even registered a GSoC project for it. I mentored Kurva Prashanth, who
did good work on the <a href="https://gitlab.com/larswirzenius/vmdb2">vmdb2</a>
scripts we use for image generation — but in the end, we were unable to get
them built on Debian infrastructure. Maybe a different approach
was needed! While I adopted the images as they were conceived by <a href="https://people.debian.org/~stapelberg//2018/06/03/raspi3-looking-for-maintainer.html">Michael
Stapelberg</a>,
sometimes it’s easier to start from scratch and build a fresh approach.</p>

<p>So, I’m not yet pointing at a stable, proven release, but at a good
promise. And I hope I’m not being pushy by making this public: in the
#debian-raspberrypi channel, <code class="language-plaintext highlighter-rouge">waldi</code> has shared the images he has created
with the Debian Cloud Team’s infrastructure.</p>

<p>So, right now, the <a href="https://salsa.debian.org/waldi/debian-cloud-images/-/jobs?name=raspi&amp;kind=BUILD">images built so
far</a>
support the Raspberry Pi 4 and 5 families (notably, <em>not</em> the 500 computer I
have, due to a missing <em>Device Tree</em>, but I’ll try to help figure that bit
out… Anyway, 400/500/500+ systems are not <em>that</em> common). Work is
underway to get the 3B+ to boot (some hackery is required, as it only
understands MBR partition schemes, so a hybrid image seems to be
needed).</p>
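<p>For illustration only — I don’t know the exact tooling being used here —
the “hybrid image” idea boils down to writing classic 16-byte MBR partition
entries alongside the GPT, so that MBR-only firmware like the 3B+’s can
still locate a boot partition. A minimal Python sketch of one such entry
(all partition numbers and sizes below are hypothetical):</p>

```python
import struct

def mbr_entry(boot: bool, ptype: int, lba_start: int, sectors: int) -> bytes:
    """Pack one 16-byte MBR partition-table entry.

    CHS fields are set to 0xFF placeholders, since modern loaders
    only look at the 32-bit LBA fields.
    """
    return struct.pack(
        "<B3sB3sII",
        0x80 if boot else 0x00,   # bootable flag
        b"\xff\xff\xff",          # CHS start (unused placeholder)
        ptype,                    # type byte: 0x0c = FAT32 LBA, 0xee = GPT protective
        b"\xff\xff\xff",          # CHS end (unused placeholder)
        lba_start,                # first sector (LBA)
        sectors,                  # length in sectors
    )

# Hypothetical hybrid layout: a real FAT32 entry for the firmware
# partition, plus a protective entry covering the GPT structures.
firmware = mbr_entry(True, 0x0C, 8192, 1048576)
protective = mbr_entry(False, 0xEE, 1, 8191)
```

<p>Tools such as <code class="language-plaintext highlighter-rouge">sgdisk</code>’s hybrid-MBR support automate this kind of rewrite;
the sketch only shows the on-disk format involved.</p>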

<p><img src="https://gwolf.org/files/2026-01/deb_raspi_cloud.png" alt="Debian Cloud images for Raspberries" /></p>

<p>Sadly, I don’t think the effort will be extended to cover older,
32-bit-only systems (RPi 0, 1 and 2).</p>

<p>Anyway, as this effort stabilizes, I will phase out my (stale!) work on
<code class="language-plaintext highlighter-rouge">raspi.debian.net</code>, and will redirect it to point at the new images.</p>

<h3 id="comments">Comments</h3>

<p><strong>Andrea Pappacoda <a href="mailto:tachi@d.o">tachi@d.o</a></strong> 2026-01-26 17:39:14 GMT+1</p>

<p>Are there any particular caveats compared to using the regular Raspberry Pi
OS?</p>

<p>Are they documented anywhere?</p>

<p><strong>Gunnar Wolf <a href="mailto:gwolf.blog@gwolf.org">gwolf.blog@gwolf.org</a></strong> 2026-01-26 11:02:29 GMT-6</p>

<p>Well, the Raspberry Pi OS includes quite a bit of software that’s not
packaged in Debian for various reasons — some of it because it’s non-free
demo-ware, some of it because it’s RPiOS-specific configuration, some of
it… I don’t care, I like running Debian wherever possible 😉</p>

<p><strong>Andrea Pappacoda <a href="mailto:tachi@d.o">tachi@d.o</a></strong> 2026-01-26 18:20:24 GMT+1</p>

<p>Thanks for the reply! Yeah, sorry, I should’ve been more specific. I also
just care about the Debian part. But: are there any hardware issues or
unsupported stuff, like booting from an SSD (which I’m currently doing)?</p>

<p><strong>Gunnar Wolf <a href="mailto:gwolf.blog@gwolf.org">gwolf.blog@gwolf.org</a></strong> 2026-01-26 12:16:29 GMT-6</p>

<p>That’s… beyond my knowledge 😉 Although I can tell you that:</p>

<ul>
  <li>
    <p>Raspberry Pi OS has hardware support as soon as their new boards hit the
market. The ability to even boot a board can take over a year for the
mainline Linux kernel (at least, it has, both in the cases of the 4 and
the 5 families).</p>
  </li>
  <li>
    <p>Also, sometimes some bits of hardware are not discovered by the Linux
kernel even if the general family boots, because they are not declared in
the right place of the Device Tree (e.g. the wireless network interface
in the 02W is at a different address than in the 3B+, or the 500 does not
fully boot while the 5B now does). Usually it is a matter of “just”
declaring stuff in the right place, but it’s not a skill many of us have.</p>
  </li>
  <li>
    <p>Also, many RPi “hats” ship with their own Device Tree overlays, and they
cannot always be loaded on top of mainline kernels.</p>
  </li>
</ul>

<p><strong>Andrea Pappacoda <a href="mailto:tachi@d.o">tachi@d.o</a></strong> 2026-01-26 19:31:55 GMT+1</p>

<blockquote>
  <p>That’s… beyond my knowledge 😉 Although I can tell you that:</p>

  <p>Raspberry Pi OS has hardware support as soon as their new boards hit the
market. The ability to even boot a board can take over a year for the
mainline Linux kernel (at least, it has, both in the cases of the 4 and
the 5 families).</p>
</blockquote>

<p>Yeah, unfortunately I’m aware of that… I’ve also been trying to boot
OpenBSD on my rpi5 out of curiosity, but been blocked by my somewhat
unusual setup involving an NVMe SSD as the boot drive :/</p>

<blockquote>
  <p>Also, sometimes some bits of hardware are not discovered by the Linux
kernel even if the general family boots, because they are not declared in
the right place of the Device Tree (e.g. the wireless network interface
in the 02W is at a different address than in the 3B+, or the 500 does not
fully boot while the 5B now does). Usually it is a matter of “just”
declaring stuff in the right place, but it’s not a skill many of us have.</p>
</blockquote>

<p>At some point in my life I had started reading a bit about device trees and
stuff, but got distracted by other stuff before I could develop any
familiarity with it. So I don’t have the skills either :)</p>

<blockquote>
  <p>Also, many RPi “hats” ship with their own Device Tree overlays, and they
cannot always be loaded on top of mainline kernels.</p>
</blockquote>

<p>I’m definitely not happy to hear this!</p>

<p>Guess I’ll have to try, and maybe report back once some page for these new
builds materializes.</p>

	]]>
      </description>
    </item>
    
    <item>
      <title>The Innovation Engine • Government-funded Academic Research</title>
      <pubDate>Tue, 13 Jan 2026 16:29:14 -0800</pubDate>
      <link>https://gwolf.org/2026/01/the-innovation-engine-government-funded-academic-research.html</link>
      <image>https://gwolf.org/files/2026-01/patterson.200.png</image>
      <guid isPermaLink="true">https://gwolf.org/2026/01/the-innovation-engine-government-funded-academic-research.html</guid>
      <description>
	<![CDATA[
		 
		 <blockquote>
		 
		   This post is an <em>unpublished</em> review
		 
		     
		       
		         for <em><a href="https://dl.acm.org/doi/10.1145/3747176">The Innovation Engine • Government-funded Academic Research</a></em>
		       
		     
		     
		   </blockquote>
		 
		 <p>David Patterson needs no introduction: he is the brain behind many
of the inventions that (repeatedly) shaped the computing industry over the
past 40 years. So when he put forward an opinion article in <em>Communications of
the ACM</em> addressing the current political waves in the USA, I could not
avoid choosing it for this review.</p>

<p>Patterson worked at a public university (the University of California at
Berkeley) between 1976 and 2016, and in this article he argues that
government-funded academic research (GoFAR) allows for faster, more
effective and freer development than private-sector-funded research would,
offering his own career milestones as an example of how the public money
that went into his research has easily been amplified by a factor of
10,000:1 for the country’s economy, and 1,000:1 for the government in
particular.</p>

<p>Patterson illustrates this by describing five of the “home-run” research
projects he started and pursued with government funding, eventually
spinning them off as successful startups:</p>

<ul>
  <li><strong>RISC</strong> (Reduced Instruction Set Computing): Microprocessor
architecture that reduces the complexity and power consumption of CPUs,
yielding much smaller and more efficient processors.</li>
  <li><strong>RAID</strong> (Redundant Array of Inexpensive Disks): Patterson experimented
with a way to present a series of independent hard drive units as if they
were a single, larger one, leading to increases in capacity and
reliability beyond what the industry could provide in single drives, for
a fraction of the price.</li>
  <li><strong>NOW</strong> (Network Of Workstations): Introduced what we now know as
computer clusters (in contrast to the large-scale, massively multiprocessed,
cache-coherent systems known as “supercomputers”), which nowadays power
over 80% of the Top500 supercomputer list and are the computing platform
of choice for practically all data centers.</li>
  <li><strong>RAD Lab</strong> (Reliable Adaptive Distributed Systems Lab): Pursued the
technology for data centers to be self-healing and self-managing, testing
and pushing early cloud-scalability limits.</li>
  <li><strong>ParLab</strong> (Parallel Computing Lab): Given the development of massively
parallel processing inside even simple microprocessors, this lab explored
how to improve the design of parallel software and hardware, laying the
groundwork that proved that inherently parallel GPUs were better than
CPUs at machine learning tasks. It also developed the RISC-V open
instruction set architecture.</li>
</ul>
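<p>As a toy illustration of the RAID principle described above — several
inexpensive drives presented as one larger, more reliable unit — data can
be striped across the drives with an XOR parity block, so that losing any
single drive loses no data. This sketch is illustrative only, not any real
RAID implementation (the chunk size and layout are made up):</p>

```python
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def stripe_with_parity(block: bytes, ndisks: int) -> list:
    """Split one logical block across ndisks-1 data drives and store
    their XOR on the remaining drive (RAID-4/5 style parity)."""
    chunk = len(block) // (ndisks - 1)
    data = [block[i * chunk:(i + 1) * chunk] for i in range(ndisks - 1)]
    return data + [reduce(xor_bytes, data)]  # last element is the parity chunk

def recover_missing(chunks: list) -> bytes:
    """Rebuild the single chunk marked as None by XOR-ing the survivors."""
    return reduce(xor_bytes, (c for c in chunks if c is not None))

# Example: stripe 30 bytes over 4 "drives", then lose drive 1 and rebuild it.
chunks = stripe_with_parity(bytes(range(30)), 4)
lost = chunks[1]
rebuilt = recover_missing([chunks[0], None, chunks[2], chunks[3]])
```

<p>The capacity of all drives but one remains usable, while any single-drive
failure is recoverable — the kind of tradeoff the RAID levels formalized.</p>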

<p>Patterson identifies principles for the projects he has led that are
especially compatible with the way research works in university systems:
multidisciplinary teams, demonstrative usable artifacts, seven- to ten-year
impact horizons, five-year sunset clauses (to create urgency and lower
opportunity costs), physical proximity of collaborators, and leadership
focused on team success rather than individual recognition.</p>

<p>While it could be argued that it is easy to point to Patterson’s work as
a success story, since he is by far not the average academic, the points he
makes on how GoFAR has been fundamental not only for the advancement of
computer science and technology, but also of biology, medicine, and several
other fields, are very clear.</p>

	]]>
      </description>
    </item>
    
    <item>
      <title>Python Workout 2nd edition</title>
      <pubDate>Mon, 12 Jan 2026 11:23:43 -0800</pubDate>
      <link>https://gwolf.org/2026/01/python-workout-2nd-edition.html</link>
      <image>https://gwolf.org/files/2026-01/python_workout.200.png</image>
      <guid isPermaLink="true">https://gwolf.org/2026/01/python-workout-2nd-edition.html</guid>
      <description>
	<![CDATA[
		 
		 <blockquote>
		 
		   This post is an <em>unpublished</em> review
		 
		     
		       
		         for <em><a href="https://www.manning.com/books/python-workout-second-edition">Python Workout 2nd edition</a></em>
		       
		     
		     
		   </blockquote>
		 
		 <p><strong>Note:</strong> While I often post the reviews I write for <em>Computing Reviews</em>,
this is a shorter review that Manning requested from me. They kindly invited
me several months ago to be a reviewer for <a href="https://www.manning.com/books/python-workout-second-edition">Python Workout, 2nd
edition</a>;
after giving them my opinions, I am happy to widely recommend this book to
interested readers.</p>

<p>Python is a relatively easy programming language to learn, allowing you to
start coding pretty quickly. However, there’s a significant gap between
being able to “throw code” in Python and truly mastering the language. To
write efficient, maintainable code that’s easy for others to understand,
practice is essential. And that’s often where many of us get stuck. This
book begins by stating that it <em>“is not designed to teach you Python (…)
but rather to improve your understanding of Python and how to use it to
solve problems.”</em></p>

<p>The author’s structure and writing style are very didactic. Each chapter
addresses a different aspect of the language: from the simplest (numbers,
strings, lists) to the most challenging for beginners (iterators and
generators), Lerner presents several problems for us to solve as examples,
emphasizing the less obvious details of each aspect.</p>

<p>I was invited to review the preprint version of the book, and I am now
very pleased to recommend it to all interested readers. The author presents
a pleasant and easy-to-read text, with a wealth of content that I am sure
will improve the Python skills of all its readers.</p>

	]]>
      </description>
    </item>
    
    <item>
      <title>Artificial Intelligence • Play or break the deck</title>
      <pubDate>Wed, 07 Jan 2026 11:46:37 -0800</pubDate>
      <link>https://gwolf.org/2026/01/artificial-intelligence-play-or-break-the-deck.html</link>
      <image>https://gwolf.org/files/2026-01/romper_la_baraja.200.png</image>
      <guid isPermaLink="true">https://gwolf.org/2026/01/artificial-intelligence-play-or-break-the-deck.html</guid>
      <description>
	<![CDATA[
		 
		 <blockquote>
		 
		   This post is a review for <a
		   href="https://www.computingreviews.com/">Computing
		   Reviews</a>
		 
		     
		       
		         for <em><a href="https://traficantes.net/libros/inteligencia-artificial-jugar-o-romper-la-baraja">Artificial Intelligence • Play or break the deck</a></em>
		       
		     
		     
		       , a book 
		       published by <em><a href="https://www.computingreviews.com/review/review_review.cfm?review_id=148076">Traficantes de Sueños</a></em>
		     
		   </blockquote>
		 
		 <p>As a little disclaimer, I usually review books or articles written in
English, and although I will offer this review to Computing Reviews as
usual, it is likely it will not be published. The title of this book in
Spanish is <strong>Inteligencia artificial: jugar o romper la baraja</strong>.</p>

<p>I was pointed at this book, published last October and written by
Margarita Padilla García, a well-known Free Software activist from Spain
who has long worked on analyzing (and shaping) aspects of
socio-technological change. As with other books published by
<a href="https://traficantes.net/">Traficantes de sueños</a>, this
book is published as Open Access, under a CC BY-NC license, and <a href="https://traficantes.net/libros/inteligencia-artificial-jugar-o-romper-la-baraja">can be
downloaded in
full</a>. I
started casually looking at this book, with too long a backlog of material
to read, but soon realized I just could not put it down: it completely
captured me.</p>

<p>This book presents several aspects of Artificial Intelligence (AI), written
for a general, non-technical audience. Many books with a similar target
have been published, but this one is quite unique; first of all, it is
written in a personal, informal tone. Contrary to what’s usual in my
reading, the author made the explicit decision not to fill the book with
references to her sources (“because by searching on the Internet, it’s very
easy to find things”), making the book easier to read linearly — a decision
I somewhat regret, but recognize helps develop the author’s style.</p>

<p>The book has seven sections, dealing with different aspects of AI:
“Visions” (historical framing of the development of AI); “Spectacular”
(why we feel AI to be so disruptive, digging particularly into game
engines and search space); “Strategies” (explaining how multilayer neural
networks work and linking the various branches of historic AI together,
arriving at Natural Language Processing); “On the inside” (tackling
technical details such as algorithms, the importance of training data,
bias, and discrimination); “On the outside” (presenting several example AI
implementations with socio-ethical implications); “Philosophy” (presenting
the works of Marx, Heidegger and Simondon in their relation to AI, work,
justice, and ownership); and “Doing” (presenting aspects of social activism
in relation to AI). Each part ends with yet another personal note: Margarita
Padilla includes a letter to one of her friends related to said part.</p>

<p>Totalling 272 pages (A5, or roughly half-letter, format), this is a rather
small book; I read it over about a week. So, while this book did not
provide a lot of new information to me, the way it was written made it
a very pleasing experience, and it will surely influence the way I
understand or explain several concepts in this domain.</p>

	]]>
      </description>
    </item>
    
    <item>
      <title>Unique security and privacy threats of large language models — a comprehensive survey</title>
      <pubDate>Mon, 15 Dec 2025 11:30:27 -0800</pubDate>
      <link>https://gwolf.org/2025/12/unique-security-and-privacy-threats-of-large-language-models-a-comprehensive-survey.html</link>
      <image>https://gwolf.org/files/2025-12/sec_priv_threats.200.png</image>
      <guid isPermaLink="true">https://gwolf.org/2025/12/unique-security-and-privacy-threats-of-large-language-models-a-comprehensive-survey.html</guid>
      <description>
	<![CDATA[
		 
		 <blockquote>
		 
		   This post is a review for <a
		   href="https://www.computingreviews.com/">Computing
		   Reviews</a>
		 
		     
		       
		         for <em><a href="https://dl.acm.org/doi/full/10.1145/3764113">Unique security and privacy threats of large language models — a comprehensive survey</a></em>
		       
		     
		     
		       , an article 
		       published in <em><a href="https://www.computingreviews.com/review/review_review.cfm?review_id=148061&listname=todaysissuearticle&">ACM Computing Surveys, Vol. 58, No. 4</a></em>
		     
		   </blockquote>
		 
		 <p>Much has been written about large language models (LLMs) being a risk to
user security and privacy, including the issue that, being trained on
datasets whose provenance and licensing are not always clear, they can be
tricked into producing bits of data that should not be divulged. I took
on reading this article as a means to gain a better understanding of this
area. The article completely fulfilled my expectations.</p>

<p>This is a review article, which is not a common format for me to follow:
instead of digging deep into a given topic, including an experiment or some
way of proving the authors’ claims, a review article contains a brief
explanation and taxonomy of the issues at hand, plus a large number of
references covering the field. And, at 36 pages and 151 references, that’s
exactly what we get.</p>

<p>The article is roughly split into two parts: the first three sections
present the issue of security and privacy threats as seen by the authors,
as well as the taxonomy within which the review will be performed, and
sections 4 through 7 cover the different moments in the life cycle of an
LLM (at pre-training, during fine-tuning, when deploying systems that will
interact with end users, and when deploying LLM-based agents), detailing
the relevant publications for each. For each of said moments, the authors
first explore the nature of the relevant risks, then present relevant
attacks, and finally close by outlining countermeasures to said attacks.</p>

<p>The text is accompanied throughout by tables, pipeline diagrams and
attack examples that visually guide the reader. While the examples
presented are sometimes a bit simplistic, they are a welcome aid for
following the explanations; the explanations for each of the attack models
are necessarily not very deep, and I was often left wondering whether I had
correctly understood a given topic, or wanting to dig deeper – but this
being a review article, that is absolutely understandable.</p>

<p>The authors’ prose is easy to read, and this article fills an important
spot in understanding this large, important, and emerging area of
LLM-related study.</p>

	]]>
      </description>
    </item>
    
    <item>
      <title>While it is cold-ish season in the North hemisphere...</title>
      <pubDate>Tue, 18 Nov 2025 19:59:47 -0800</pubDate>
      <link>https://gwolf.org/2025/11/while-it-is-cold-ish-season-in-the-north-hemisphere.html</link>
      <image>https://gwolf.org/files/2025-11/vaccinated.200.jpg</image>
      <guid isPermaLink="true">https://gwolf.org/2025/11/while-it-is-cold-ish-season-in-the-north-hemisphere.html</guid>
      <description>
	<![CDATA[
		 
		 <p>Last week, our university held a «Mega Vaccination Center». Things cannot
be small or regular with my university, ever! According to the official
information, during last week ≈31,000 people were given a total of ≈74,000
vaccine doses against influenza, COVID-19, pneumococcal disease and measles
(specific vaccines for each person selected according to an age profile).</p>

<p>I was a tiny blip in said numbers. One person, three shots. Took me three
hours, but am quite happy to have been among the huge crowd.</p>

<p><a href="https://gwolf.org/files/2025-11/cola_vacunacion.jpg"><img src="https://gwolf.org/files/2025-11/cola_vacunacion.400.jpg" alt="Long, long line" /></a></p>

<p>(↑ photo credit: <a href="https://www.jornada.com.mx/noticia/2025/11/14/politica/cancela-unam-jornada-de-vacunacion-por-agresion-a-su-personal">La Jornada, 2025.11.14</a>)</p>

<p><a href="https://gwolf.org/files/2025-11/vaccinated.jpg"><img src="https://gwolf.org/files/2025-11/vaccinated.400.jpg" alt="Really vaccinated!" /></a></p>

<p>And why am I bringing this up? Because I have long been involved in
organizing DebConf, the best conference ever, naturally devoted to
improving Debian GNU/Linux. And last year, our COVID reaction procedures
ended up hurting people we care about. We, as organizers, are taking it
seriously to shape a humane COVID handling policy that is, at the same
time, responsible and respectful of people who are (reasonably!) afraid to
catch the infection. No, COVID did not disappear in 2022, and its effects
are not something we can turn a blind eye to.</p>

<p>Next year, DebConf will take place in Santa Fe, Argentina, in July. This
means it will be a Winter DebConf. And while you can catch COVID (or
influenza, or just a bad cold) at any time of year, the odds are a bit
higher then.</p>

<p>I know not every country still administers free COVID or influenza vaccines
to anybody who requests them. And I know that any protection I might have
got now will be quite a bit weaker by July. But I feel it necessary to ask
everyone <strong>who can get it</strong> to get a shot. Most Northern Hemisphere
countries will have a vaccination campaign (or at least, higher vaccine
availability) before Winter.</p>

<p>If you plan to attend DebConf (hell… If you plan to attend <em>any massive
gathering of people travelling from all over the world to sit at a crowded
auditorium</em>) during the next year, please… Act responsibly. For yourself
and for those surrounding you. Get vaccinated. It won’t <em>absolutely</em> save
you from catching it, but it will reduce the probability. And if you do
catch it, you will probably have a much milder version. And thus, you will
spread it less during the first days until (and if!) you start developing
symptoms.</p>

	]]>
      </description>
    </item>
    
    <item>
      <title>404 not found</title>
      <pubDate>Fri, 14 Nov 2025 11:27:01 -0800</pubDate>
      <link>https://gwolf.org/2025/11/404-not-found.html</link>
      <image>https://gwolf.org/files/2025-11/404_not_found.200.jpeg</image>
      <guid isPermaLink="true">https://gwolf.org/2025/11/404-not-found.html</guid>
      <description>
	<![CDATA[
		 
		 <p>Found this graffiti on the wall behind my house today:</p>

<p><a href="https://gwolf.org/files/2025-11/404_not_found.jpeg"><img src="https://gwolf.org/files/2025-11/404_not_found.400.jpeg" alt="404 not found!" /></a></p>

	]]>
      </description>
    </item>
    
    <item>
      <title>LLM Hallucinations in Practical Code Generation — Phenomena, Mechanism, and Mitigation</title>
      <pubDate>Tue, 21 Oct 2025 15:08:14 -0700</pubDate>
      <link>https://gwolf.org/2025/10/llm-hallucinations-in-practical-code-generation-phenomena-mechanism-and-mitigation.html</link>
      <image>https://gwolf.org/files/2025-10/llm_hallucinations.200.png</image>
      <guid isPermaLink="true">https://gwolf.org/2025/10/llm-hallucinations-in-practical-code-generation-phenomena-mechanism-and-mitigation.html</guid>
      <description>
	<![CDATA[
		 
		 <blockquote>
		 
		   This post is a review for <a
		   href="https://www.computingreviews.com/">Computing
		   Reviews</a>
		 
		     
		       
		         for <em><a href="https://dl.acm.org/doi/abs/10.1145/3728894">LLM Hallucinations in Practical Code Generation — Phenomena, Mechanism, and Mitigation</a></em>
		       
		     
		     
		       , an article 
		       published in <em><a href="https://www.computingreviews.com/review/review_review.cfm?review_id=148040">Proceedings of the ACM on Software Engineering, Volume 2, Issue ISSTA</a></em>
		     
		   </blockquote>
		 
		 <p>How good can large language models (LLMs) be at generating code? This may
not seem like a very novel question, as several benchmarks (for example,
HumanEval and MBPP, published in 2021) existed before LLMs burst into
public view and started the current artificial intelligence (AI)
“inflation.” However, as the paper’s authors point out, code generation is
very seldom done as an isolated function, but instead must be deployed in a
coherent fashion together with the rest of the project or repository it is
meant to be integrated into. Today, several benchmarks (for example,
CoderEval or EvoCodeBench) measure the functional correctness of
LLM-generated code via test case pass rates.</p>

<p>This paper brings a new proposal to the table: evaluating LLM-generated
repository-level code by examining the hallucinations it produces. The
authors begin by running the Python code generation tasks
proposed in the CoderEval benchmark against six code-generating LLMs. Next,
they analyze the results and build a taxonomy to describe code-based LLM
hallucinations, with three types of conflicts (task requirement, factual
knowledge, and project context) as first-level categories and eight
subcategories within them. The authors then compare the results of each of
the LLMs per the main hallucination category. Finally, they try to find the
root cause for the hallucinations.</p>

<p>The paper is structured very clearly, not only presenting the three
research questions (RQ) but also referring to them as needed to explain why
and how each partial result is interpreted. RQ1 (establishing a
hallucination taxonomy) is the most thoroughly explored. While RQ2 (LLM
comparison) is clear, it just presents straightforward results without much
analysis. RQ3 (root cause discussion) is undoubtedly interesting, but I
feel it to be much more speculative and not directly related to the
analysis performed.</p>

<p>After tackling their research questions, Zhang et al. propose a possible
mitigation to counter the effect of hallucinations: enhance the LLM with
retrieval-augmented generation (RAG) so it better understands task
requirements, factual knowledge, and project context. The presented results
show that all of the models are clearly (though modestly) improved by the
proposed RAG-based mitigation.</p>

<p>The paper is clearly written and easy to read. It should provide its target
audience with interesting insights and discussions. I would have liked more
details on their RAG implementation, but I suppose that’s for a follow-up
work.</p>

	]]>
      </description>
    </item>
    
    <item>
      <title>Can a server be just too stable?</title>
      <pubDate>Tue, 14 Oct 2025 11:22:15 -0700</pubDate>
      <link>https://gwolf.org/2025/10/can-a-server-be-just-too-stable.html</link>
      <image>https://gwolf.org/files/2025-10/debian_rock_1.200.png</image>
      <guid isPermaLink="true">https://gwolf.org/2025/10/can-a-server-be-just-too-stable.html</guid>
      <description>
	<![CDATA[
		 
		 <p>One of my servers at work leads a very light life: it is our main backups
server (so it has an I/O spike at night, with little CPU involvement) and
has some minor services running (e.g. a couple of Tor relays and my
personal email server — yes, I have the authorization for it 😉). It is a
very stable machine… But today I was surprised:</p>

<p>As I am about to migrate it to Debian 13 (<em>Trixie</em>), naturally, I am set to
reboot it. But before doing so:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ w
 12:21:54 up 1048 days, 0 min,  1 user,  load average: 0.22, 0.17, 0.17
USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU  WHAT
gwolf             192.168.10.3     12:21           0.00s  0.02s sshd-session: gwolf [priv]
</code></pre></div></div>

<p><strong>Wow</strong>. Did I really last reboot this server on <strong>December 1 2022</strong>?</p>

<p>(Yes, I know this might speak badly of my security practices, as there are
several kernel updates I never applied, even having installed the relevant
packages. Still, it left me impressed 😉)</p>

<p>Debian. Rock solid.</p>

<p><a href="https://gwolf.org/files/2025-10/debian_rock_1.png"><img src="https://gwolf.org/files/2025-10/debian_rock_1.400.png" alt="Debian Rocks" /></a></p>

	]]>
      </description>
    </item>
    
  </channel>
</rss>
