But 22301 isn’t prime? It’s 29*769.
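A quick way to double-check that, with coreutils' factor:

```bash
$ factor 22301
22301: 29 769
```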
Audalin@lemmy.world to 3DPrinting@lemmy.world • What are the most useful things you've printed?
10 · 3 months ago
A piece of plastic broke off from my laptop once. It was supposed to hold one of the two screws fixing the cover of the RAM & drive section, and afterwards there was just a larger round hole. I measured the hole and the screw, designed a replacement in Blender (not identical - I wanted something more solid and reliable) and printed it; it took two attempts to get the shape perfectly right. I've had zero issues with it in all these years.
Thanks! I now see that Tai Chi is mentioned frequently online in the context of the film, unlike yoga, so that should be right; it narrows things down.
Audalin@lemmy.world to Books@lemmy.world • Hey all! anyone know a good free ereader that has accessibility functions?
1 · 6 months ago
KOReader supports custom CSS. You can certainly change the background colour with it, and I think a grid should be possible too.
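A minimal sketch of what that could look like (the directory and file name are assumptions - KOReader picks up user style tweaks as .css files in the styletweaks/ folder under its settings directory; adjust paths for your device):

```bash
# Assumed location of KOReader's user style tweaks.
mkdir -p ~/koreader/styletweaks
cat > ~/koreader/styletweaks/background.css <<'EOF'
/* Tint the page background; pick any colour you like. */
body { background-color: #e8e4d8; }
EOF
```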
Audalin@lemmy.world to LocalLLaMA@sh.itjust.works • [April 2025] Which model are you using?
2 · 7 months ago
Those are the ones, the 0414 release.
Audalin@lemmy.world to LocalLLaMA@sh.itjust.works • [April 2025] Which model are you using?
5 · 7 months ago
QwQ-32B for most questions, llama-3.1-8B for agents. I'm looking for new models to replace them though, especially the agent one.
I want to test the new GLM models, but I'd rather wait for llama.cpp to definitively fix the bugs with them first.
Audalin@lemmy.world to LocalLLaMA@sh.itjust.works • Anyone found "optimal" settings for llama.cpp partial offload?
4 · 9 months ago
What I've ultimately converged to, without any rigorous testing, is:
- using Q6 if it fits in VRAM+RAM (anything higher is a waste of memory and compute for barely any gain), otherwise either some small quant (rarely) or ignoring the model altogether;
- not really using IQ quants - as far as I remember, they depend on a calibration dataset, and I don't want the model's behaviour to be affected by some additional dataset;
- other than the Q6 rule, in any trade-off between speed and quality I choose quality - my usage volumes are low and I'd rather wait for a good result;
- loading as much as I can into VRAM, leaving 1-3GB for the system and context; see the sketch below.
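A minimal sketch of what this looks like as a llama.cpp invocation (the model file, layer count and context size are placeholders to tune for your hardware):

```bash
# -ngl sets how many layers go to the GPU; raise it until the model
# plus context no longer leaves ~1-3GB of VRAM free, then back off.
./llama-server \
    -m models/qwq-32b-Q6_K.gguf \
    -ngl 40 \
    -c 8192
```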
Audalin@lemmy.world to Free Open-Source Artificial Intelligence@lemmy.world • Local, privacy respecting, open source LLM that can transcribe phone calls and summarize the call?
1 · 1 year ago
Haven't heard of all-in-one solutions, but once you have a recording, whisper.cpp can do the transcription. The underlying Whisper models are MIT-licensed.
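For instance (file names are placeholders; whisper.cpp wants 16kHz mono WAV input):

```bash
# Convert the recording to the 16 kHz mono WAV whisper.cpp expects.
ffmpeg -i call.mp3 -ar 16000 -ac 1 call.wav

# Transcribe with a local Whisper model (fetch one first, e.g. with
# whisper.cpp's models/download-ggml-model.sh). Writes call.wav.txt.
./main -m models/ggml-base.bin -f call.wav -otxt
```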
Then you can use any LLM inference engine, e.g. llama.cpp, and ask the model of your choice to summarise the transcript:
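Something like this, say (the model file is a placeholder; the instruction is prepended to the transcript and read back with llama.cpp's -f prompt-file flag):

```bash
# Build a prompt from the transcript and run it through a local model.
printf 'Summarise the following phone call:\n\n' | cat - call.wav.txt > prompt.txt
./llama-cli -m models/your-model.gguf -f prompt.txt -n 512
```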
You can also write a small bash/python script to make the process a bit more automatic.
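For example, a hypothetical glue script tying the steps above together (all paths and model names are placeholders):

```bash
#!/usr/bin/env bash
# Usage: ./summarise-call.sh recording.mp3
set -e
ffmpeg -y -i "$1" -ar 16000 -ac 1 /tmp/call.wav
./main -m models/ggml-base.bin -f /tmp/call.wav -otxt
printf 'Summarise the following phone call:\n\n' | cat - /tmp/call.wav.txt > /tmp/prompt.txt
./llama-cli -m models/your-model.gguf -f /tmp/prompt.txt -n 512
```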
Audalin@lemmy.world to Enshittification@lemmy.world • The Internet is becoming genuinely unusable without an ad blocker
1 · 1 year ago
Discounting temporary tech issues, I haven't browsed the internet without an adblocker for a single day in my entire life. Nobody is entitled to abuse my attention; no guilt, no exceptions.
Should be doable with Termux: you'll need the Termux:API add-on (the termux-sms-list and termux-sms-send commands); termux-sms-list returns messages in JSON, which is easy enough to handle with, say, jq in bash or json in python. The script itself can be a simple loop that fetches the latest messages every few minutes, filters for unprocessed ones from whitelisted numbers and calls termux-sms-send.
Maybe it'd make sense to daemonise the script and launch it via sv. But the Termux app weighs quite a bit itself.
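A sketch of such a loop in bash (the whitelist number and reply text are placeholders, and the JSON field names like _id and number are assumptions to check against termux-sms-list's actual output):

```bash
#!/data/data/com.termux/files/usr/bin/bash
# Poll for new SMS and auto-reply to whitelisted senders via Termux:API.
WHITELIST="+15551234567"       # placeholder number
SEEN=~/.sms-seen               # ids of already-processed messages
touch "$SEEN"

while true; do
    termux-sms-list -l 20 | jq -c '.[]' | while read -r msg; do
        id=$(jq -r '._id' <<<"$msg")
        from=$(jq -r '.number' <<<"$msg")
        grep -qx "$id" "$SEEN" && continue
        [ "$from" = "$WHITELIST" ] && termux-sms-send -n "$from" "Received."
        echo "$id" >> "$SEEN"
    done
    sleep 300                  # every five minutes
done
```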