

Sony WH-1000XM4/5/6
I don’t have one of those, but they’re pretty popular as headphones with good ANC.
Jlab Epic Air Sport ANC
I do have those, though.
In fairness, rural America probably didn’t entirely understand the implications of said vote.
As I’ve pointed out on here before, I feel like a lot of people in mostly-Republican-voting rural America are going to be even more disappointed when they discover agricultural subsidies ending, healthcare subsidies that disproportionately benefit poorer, rural areas ending, the illegal-immigrant agricultural workers that farms rely on becoming unavailable, counter-tariffs that tend to target agricultural output from rural areas, and so forth.
What did you think of the new aiming system? I’ve heard mixed things, but it sounded good to me (or at least way better than a flat percentage).
I don’t know what the internal mechanics are like; I haven’t read material about it. From a user standpoint, I just get a list of positive and negative factors affecting my hit chance, so less information than a flat percentage would give. I guess I’d vaguely prefer the percentage — I’m generally not a huge fan of games that have the player rely on mechanics while hiding the details of those mechanics — but it’s nice to know what inputs are present. Honestly, it hasn’t been a huge factor for me one way or the other; I feel like I’ve got a solid-enough idea of roughly what the chances are.
even if it doesn’t hit the same highs as JA2, there hasn’t really been much else that comes close and a more modern coat of polish would be welcome.
Yeah, I don’t know of other things that have the strategic aspect. For the squad-based tactical turn-based combat, there are some options that I’ve liked playing in the past.
While Wasteland 2 and Wasteland 3 aren’t quite the same thing — they’re closer to Fallout 1 and 2, since Wasteland 1 was a major inspiration for those — their squad-based, turn-based tactical combat system is somewhat similar, and if you’re hunting for games that have that, you might enjoy them as well.
I also played Silent Storm and enjoyed it, though it’s now pretty long in the tooth (well, so is Jagged Alliance 2…). Even more of a combat focus. Feels lower budget, slightly unfinished.
And there’s X-COM. I didn’t like the new ones, which are glitzy, with a lot of time spent on dramatic animations and the like, but maybe I should go back and give them another chance.
I strongly doubt that they’d render Steam not runnable on their distro.
I’m sorry, you are correct. The syntax and interface mirror Docker’s, and one can run ollama in Docker, so I’d thought that it was a thin wrapper around Docker, but I just went to check, and you are right — it’s not running in Docker by default. Sorry, folks! Guess now I’ve got one more thing to look into getting inside a container myself.
While I don’t think that llama.cpp is specifically a special risk, I think that running generative AI software in a container is probably a good idea. It’s a rapidly-moving field with a lot of people contributing a lot of code that very quickly gets run on a lot of systems by a lot of people. There’s been malware that’s shown up in extensions for (for example) ComfyUI. And the software really doesn’t need to poke around at outside data.
Also, because the software has to touch the GPU, it needs a certain amount of outside access. Containerizing that takes some extra effort.
https://old.reddit.com/r/comfyui/comments/1hjnf8s/psa_please_secure_your_comfyui_instance/
ComfyUI users have been hit time and time again with malware from custom nodes or their dependencies. If you’re just using the vanilla nodes, or nodes you’ve personally developed yourself or vet yourself every update, then you’re fine. But you’re probably using custom nodes. They’re the great thing about ComfyUI, but also its great security weakness.
Half a year ago the LLMVISION node was found to contain an info stealer. Just this month the ultralytics library, used in custom nodes like the Impact nodes, was compromised, and a cryptominer was shipped to thousands of users.
Granted, the developers have been doing their best to try to help all involved by spreading awareness of the malware and by setting up an automated scanner to inform users if they’ve been affected, but what’s better than knowing how to get rid of the malware is not getting the malware at all.
Why Containerization is a solution
So what can you do to secure ComfyUI, which has a main selling point of being able to use nodes with arbitrary code in them? I propose a band-aid solution that, I think, isn’t horribly difficult to implement that significantly reduces your attack surface for malicious nodes or their dependencies: containerization.
Ollama means sticking llama.cpp in a Docker container, and that is, I think, a positive thing.
If there were a close analog to ollama, like some software package that could take a given LLM model and run in podman or Docker or something, I think that that’d be great. But I think that putting the software in a container is probably a good move relative to running it uncontainerized.
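As a sketch of what that can look like, something like the following runs llama.cpp’s server inside a rootless container. The image path, tag, and model filename here are assumptions based on the upstream llama.cpp container images — check the project’s docs before copying:

```shell
# Run llama.cpp's server in a rootless podman container, so the fast-moving
# inference code can't poke around at the rest of the system.
# Image path/tag and model filename are assumptions; adjust to taste.
podman run --rm -p 8080:8080 \
  -v ~/models:/models:ro \
  ghcr.io/ggml-org/llama.cpp:server \
  -m /models/some-model.gguf --host 0.0.0.0 --port 8080
```

GPU passthrough is the extra effort mentioned above: with an NVIDIA card you’d add a device flag via the NVIDIA Container Toolkit and switch to a CUDA-enabled image tag, which is more setup than running the binary bare.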
PNG has terrible compression
It’s fine if you’re using it for what it’s intended for, which is images with flat color or an ordered dither.
It’s not great for compressing photographs, but then, that wasn’t what it was aimed at.
Similarly, JPEG isn’t great at storing flat-color lossless images, which is PNG’s forte.
Different tools for different jobs.
At least at one point, GIF89a (animated GIF) support was universal among browsers, whereas animated PNG support was patchy. Could have changed.
I’ve also seen “GIF” files served up online that are actually, internally, animated PNG files, so some may actually be animated PNGs. No idea why people do that.
On the “better compression” front, I’d also add that I doubt that either PNG or WebP represent the pinnacle of image compression. IIRC from some years back, the best known general-purpose lossless compressors are neural-net based, and not fast.
kagis
https://fahaihi.github.io/NNLCB/
These guys apparently ran a number of tests. A neural-net-based compressor named “NNCP” got their best compression ratio, beating out the also-neural-net-based PAC, which is the one I think I was recalling.
The compression time for either was far longer than for traditional non-neural-net compressors like LZMA, with NNCP taking about 12 times as long as PAC and PAC taking about 127 times as long as LZMA.
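Chaining those quoted ratios gives a sense of how slow the neural-net compressors are end to end. Rough figures only, and heavily hardware-dependent:

```python
# Rough relative encode-time ratios quoted from the NNLCB benchmark above.
pac_vs_lzma = 127    # PAC took ~127x as long as LZMA
nncp_vs_pac = 12     # NNCP took ~12x as long as PAC

# So NNCP ends up roughly three orders of magnitude slower than LZMA.
nncp_vs_lzma = nncp_vs_pac * pac_vs_lzma
print(f"NNCP is ~{nncp_vs_lzma}x slower than LZMA")  # ~1524x
```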
What’s next?
I know you all immediately wondered, “better compression?” We’re already working on that. And parallel encoding/decoding, too! Just like this update, we want to make sure we do it right.
We expect the next PNG update (Fourth Edition) to be short. It will improve HDR & Standard Dynamic Range (SDR) interoperability. While we work on that, we’ll be researching compression updates for PNG Fifth Edition.
One thing I’d like to see from image formats and libraries is better support for very high resolution images. Like, images where you’re zooming into and out of a very large, high-resolution image and probably only looking at a small part of the image at any given point.
I was playing around with some high resolution images a bit back, and I was quite surprised to find how poor the situation is. Try viewing a very high resolution PNG in your favorite image-viewing program, and it’ll probably choke.
At least on Linux, it looks like the standard native image viewers don’t do a great job here, and as best I can tell, the norm is to use web-based viewers. These deal with poor image-format support for high resolutions by generating versions of the image at multiple pre-scaled zoom levels, then slicing each level into tiles and saving each tile as a separate image, so that a web browser just pulls down a handful of appropriate tiles from a web server. Viewers and library APIs need to be able to work with the image without having to decode the whole thing.
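The tile-pyramid scheme those web viewers use is simple enough to sketch. A minimal version — the 256-pixel tile size and the function names are just my assumptions for illustration:

```python
import math

TILE = 256  # common tile edge in deep-zoom-style pyramids (an assumption here)

def pyramid_levels(width, height, tile=TILE):
    """Number of 2x-downscaled levels until the whole image fits in one tile."""
    return max(0, math.ceil(math.log2(max(width, height) / tile))) + 1

def tiles_for_viewport(vx, vy, vw, vh, tile=TILE):
    """Tile (col, row) indices covering a viewport given in level-space pixels.

    Only these tiles need to be fetched and decoded; the rest of the
    image never has to touch memory.
    """
    c0, r0 = vx // tile, vy // tile
    c1, r1 = (vx + vw - 1) // tile, (vy + vh - 1) // tile
    return [(c, r) for r in range(r0, r1 + 1) for c in range(c0, c1 + 1)]
```

For a 4096×4096 image this gives a 5-level pyramid, and a 600×300 viewport touches only a handful of tiles at any zoom level, which is the whole trick.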
gliv used to do very smooth GPU-accelerated panning and zooming — I’d like to be able to do the same for very high-resolution images, decoding and loading visible data into video memory as required.
The only image format I could find that seemed to do reasonably well was pyramidal TIFF.
I would guess that better parallel encoding and decoding support is likely associated with solving this, since limiting the portion of the image that one needs to decode is probably necessary both for parallel decoding and for efficient high-resolution processing.
WebP had been kind of moving in on its turf, based on what I’ve been seeing websites using.
They have mechanical components that will wear out over time (though I suppose some people probably use them lightly enough that it’s less of an issue).
Just tried it, and it was some other game I was thinking of; I hadn’t played JA3 yet.
While I haven’t finished the game, thoughts:
It’s the strongest of the post-2 Jagged Alliance games that I’ve played.
Still not on par with JA2, at least relative to release year, I’d say also in absolute terms.
My biggest problem — I’m running this under Proton — is some bugginess that I’m a little suspicious is a thread deadlock. When it happens, I never see the targeting options show up when I target an enemy, and trying to go to the map or inventory screen doesn’t update the visible area onscreen, though I can blindly click and hear interactions. The game also doesn’t ever exit if I hit Alt-F4 in that state, just hangs. AFAICT, this can always be resolved by quicksaving (which you can do almost anywhere), stopping the game (I use kill in a terminal on Linux), and reloading the save, but it’s definitely obnoxious. Fortunately, the game starts up pretty quickly. Nobody on ProtonDB talking about it, so maybe it’s just me. I have not noticed bugs other than this one.
So far, not much by way of missions where one has to figure out elaborate ways of getting into areas or the like: more of a combat focus. I have wirecutters, crowbars, lockpicks, and explosives, like in JA2, but thus far, it’s mostly just a matter of clicking on a locked container with someone who has lockpicking skill. Probably more realistic — in real life, an unattended door isn’t going to stop anyone for long — but I kinda miss that.
The maps feel a lot smaller to me, though the higher resolution might be part of that. A lot of 3d modeling to make them look pretty. There’s a lot more verticality, like watchtowers.
The game also feels considerably shorter than JA2, based on the percentage of the strategic map that I’ve taken. That being said, JA2 could get a bit repetitive when one is fighting the umpteenth enemy reinforcement party.
Unique perks for mercs that make them a lot more meaningful than in JA2 (though also limit your builds). For example, Fox can get what is basically a free turn if she initiates combat on a surprised enemy. Barry auto-constructs explosives each day.
Thematic feel of the mercs from JA2 is retained well.
Interesting perk tree.
A bunch of map modifiers like fog that have a major impact.
Bunch of QoL stuff for scheduling concurrent tasks for different mercs.
Pay demands don’t seem to rise with level, though other factors can drive it up (e.g. Fox will demand more pay if you hire Steroid).
Feels easier than JA2, though I haven’t finished it.
I’m pretty sure the keybindings are different.
Tiny thing, but I always liked the start of JA2, where your initial team does a fast-rope helicopter insertion into a hostile sector. Felt like a badass way to set the tone. No real analog in JA3.
I started running into guys with RPGs early on in JA3, much earlier than in JA2.
JA2 has ground vehicles and a helicopter, and they require you to obtain fuel. Transport logistics don’t exist in JA3, other than paying to embark on boat trips at a port (and I just checked online to confirm that those aren’t just a late-game thing).
More weapon mods in JA3. Looks like some interesting tradeoffs that one has to make here, rather than just “later-game stuff is better”.
For me, it was a worthwhile purchase — even with the irritating bug I keep hitting — and I would definitely recommend it over the other post-JA2 stuff if you’ve played JA2 and want more. It hasn’t left me giggling at the insane amount of complex interactions that were coded into the game like JA2 did, though, which were kind of a hallmark of the original.
From the article, I believe that it’s Steam Deck parts, not Steam Controller 1 parts.
Which makes sense, because you can get a Steam Deck, but the Steam Controller 1 has been out of production for some years.
EDIT: Wikipedia says that production ended in 2019.
Seems like it might be useful to have a per-site toggle.
I’m pretty sure that it defaults to best quality.
goes looking at man page
By default, yt-dlp tries to download the best available quality if you don't pass any options. This is generally
equivalent to using -f bestvideo*+bestaudio/best. However, if multiple audiostreams is enabled (--audio-multistreams),
the default format changes to -f bestvideo+bestaudio/best. Similarly, if ffmpeg is unavailable, or if you use yt-dlp
to stream to stdout (-o -), the default becomes -f best/bestvideo+bestaudio.
So I think that it should normally pull down the best audio, unless you get into some situation where YouTube doesn’t offer the highest audio quality and the highest video quality in a combination it can use; if it has to make a choice to get the highest video quality, it’ll sacrifice audio quality.
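For reference, the default selector can be passed explicitly, and you can override it if you care more about audio than video. The URL here is just a placeholder:

```shell
# Explicit version of yt-dlp's usual default: best video plus best audio,
# falling back to the best single combined format.
yt-dlp -f 'bestvideo*+bestaudio/best' 'https://example.com/some-video'

# If all you want is the soundtrack, grab the best audio-only stream
# regardless of video.
yt-dlp -f 'bestaudio' 'https://example.com/some-video'
```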
EDIT: Hmm. I could have sworn that there was more text about prioritizing relative audio and video quality at one point in the man page, but I don’t see anything there now. Maybe it can just always get the best audio quality, regardless of video quality, can pull 'em entirely separately.
The title is a bit click-baity.
Steam had a setting where it would only run Proton on games on which it had been verified to work. Some people would inadvertently flip this setting off. Now the setting is gone, so they can’t accidentally do this.
If your phone is Android, NewPipe is an open-source, third-party client that permits setting quality. It’s on F-Droid (the big open-source app repository) if you use that, and probably on the Google Play Store as well.
Thanks.
EDIT: There isn’t an --embed-auto-subs, but there is a --write-auto-subs.
I mean, there were legitimate technical issues with the standard, especially on smartphones, which is where they really got pushed out. Most other devices do have headphones jacks. If I get a laptop, it’s probably got a headphones jack. Radios will have headphones jacks. Get a mixer, it’s got a headphones jack. I don’t think that the standard is going to vanish anytime soon in general.
I like headphones jacks. I have a ton of 1/8" and 1/4" devices and headphones that I happily use. But they weren’t doing it for no reason.
From what I’ve read, the big one that drove them out on smartphones was that the jack just takes up a lot more physical space in the phone than USB-C or Bluetooth do. I’d rather just have a thicker phone, but a lot of people wouldn’t, and if you’re going all over the phone trying to figure out what to eject to buy more space, that’s going to be a big target. For people who do want a jack on smartphones, which invariably have USB-C, you can get a similar effect by just leaving a small USB-C audio interface with a headphones jack on the end of your headphones (one with a passthrough USB-C port if you also want to use the USB-C port for charging).
A second issue was that the standard didn’t have a way to provide power (there was a now-dead extension from many years back, IIRC for MD players, that let a small amount of power be provided over an extra ring). That didn’t matter for a long time, so long as your device could put out a strong enough signal to drive headphones of whatever impedance you had. But ANC has started to become popular, and you need power for ANC. This is really the first time, I think, that there’s been a solid reason to want to power headphones through the jack.
The connection also got momentarily shorted when plugging things in and out, which could put a loud pop through the speaker membrane.
USB-C is designed so that the springy tensioning stuff that’s there to keep the connection solid is on the (cheap, easy to replace) cord rather than the (expensive, hard to replace) device; I understand from past reading that this was a major reason that micro-USB replaced mini-USB. Instead of your device wearing out, the cord wears out. Not as much of an issue for headphones as mini-USB, but I think that it’s probably fair to say that it’s desirable to have the tensioning on the cord side.
On USB-C, the right part breaks. One irritation I have with USB-C is that it is…kind of flimsy. Like, it doesn’t require that much force pushing on a plug sideways to damage a plug. However — and I don’t know if this was a design goal for USB-C, though I suspect it was — my experience has been that if that happens, it’s the plug on the (cheap, easy to replace) cord that gets damaged, not the device. I have a television with a headphones jack that I destroyed by tripping over a headphones cord once, because the headphones jack was nice and durable and let me tear components inside the television off. I’ve damaged several USB-C cables, but I’ve never damaged the device they’re connected to while doing so.
On an interesting note, the standard is extremely old, probably one of the oldest data standards in general use today; the 1/4" mono standard was from phone switchboards in the 1800s.
EDIT: Also, one other perk of using USB-C instead of a built-in headphones jack on a smartphone is that if the DAC on your phone sucks, going the USB-C-audio-interface route means that you can use a different DAC. Can’t really change the internal DAC. I don’t know about other people, but last phone I had that did have an audio jack would let through a “wub wub wub” sound when I was charging it on USB off my car’s 12V cigarette lighter adapter — dirty power, but USB power is often really dirty. Was really obnoxious when feeding my car’s stereo via its AUX port. That’s very much avoidable for the manufacturer by putting some filtering on the DAC’s power supply, maybe needs a capacitor on the thing, but the phone manufacturer didn’t do it, maybe to save space or money. That’s not something that I can go fix. I eventually worked around it by getting a battery-powered Bluetooth receiver that had a 1/8" headphones jack, cutting the phone’s DAC out of the equation. The phone’s internal DAC worked fine when the phone wasn’t charging, but I wanted to have the phone plugged in for (battery hungry) navigation stuff when I was driving.