Users point out in the comments that the LLM recommends APT on Fedora, which is clearly wrong. I can't tell if OP is responding with an LLM as well; it would be really embarrassing if so.
PS: Debian is really cool btw :)
I have been using gpt-oss:20b to help me with bash scripts, and so far it's been pretty handy. But I make sure I know what I'm asking for and that I understand the output, so basically I might have been better off with 2010-ish Google and non-enshittified community resources.
Yeah, that is a great application because you can eyeball your bash script and verify its functionality. It’s perfectly checkable. This is a very important distinction.
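A sketch of what that checking can look like in practice: syntax-check the generated script without executing it, then run it against a throwaway directory with known contents. The script body below is just a stand-in for something an LLM might hand you; the point is the verification steps around it.

```python
# Sketch: mechanically sanity-checking an LLM-generated bash script
# before trusting it. The script content is a placeholder example.
import subprocess
import tempfile
import os

script = """#!/usr/bin/env bash
set -euo pipefail
# count .log files in the given directory
dir="${1:-.}"
find "$dir" -maxdepth 1 -name '*.log' | wc -l
"""

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "count_logs.sh")
    with open(path, "w") as f:
        f.write(script)

    # Step 1: parse-only check (bash -n reports syntax errors, runs nothing).
    subprocess.run(["bash", "-n", path], check=True)

    # Step 2: run it against a directory with known contents.
    for name in ("a.log", "b.log", "notes.txt"):
        open(os.path.join(tmp, name), "w").close()
    out = subprocess.check_output(["bash", path, tmp], text=True)
    print(out.strip())  # prints 2: the two .log files we just created
```

That second step is the "eyeball and verify" part made concrete: you know the expected answer before you run it.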
It also doesn’t require “creativity” or speculation, so (I assume) you can use a very low temperature.
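For the curious, here is roughly what a low-temperature request could look like. Ollama (a common way to run gpt-oss:20b locally) exposes an OpenAI-compatible endpoint, and `temperature` is a standard parameter there; this sketch only builds the payload, nothing is sent, so no server is needed to follow along.

```python
# Sketch: a near-deterministic request body for an OpenAI-compatible
# endpoint (Ollama serves one at http://localhost:11434/v1 by default).
# The prompts are illustrative, not from the original discussion.
import json

payload = {
    "model": "gpt-oss:20b",
    "temperature": 0.1,  # low temperature: good for checkable tasks like bash
    "messages": [
        {"role": "system",
         "content": "You write bash scripts. Output only the script."},
        {"role": "user",
         "content": "Script to archive logs older than 7 days in /var/log/myapp."},
    ],
}

# This is the body you'd POST to /v1/chat/completions.
print(json.dumps(payload, indent=2))
```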
Contrast that with Red Hat’s examples.
They’re feeding it a massive dump of context (basically all the system logs), and asking the LLM to reach into its own knowledge pool for an interpretation.
Its assessment is long and not easily verifiable; see how the blog writer even confessed "I'll check if it works later." It requires more "world knowledge." And long context is hard for low-active-parameter LLMs.
Hence, you really want a model with more active parameters for that… Or, honestly, just reach out to a free LLM API.
Thing is, that Red Hat blogger could probably run GLM Air on their laptop and get a correct answer out of it, but it would be extremely finicky and time-consuming.