I never did that, my connection was too slow to want to take up someone’s DCC slot for like a day to get an entire movie. Remember all the frustrating idiots who would share .lit files, but forget to remove the DRM from them?
Blind geek, fanfiction lover (Harry Potter and MLP). Mastodon at: @[email protected].
Ah, good to know. Back in my day, when we had to walk a hundred miles to school in the snow, up hill both ways, IRC was the only place to get ebooks. I’m guessing it’s just the old users clinging on now.
Man, I’m getting flashbacks to my days running omenserve on undernet. I had no idea people were still doing this! How does the content compare to places like Anna’s archive these days?
Prophecy Approved Companion is excellent! It gave me all the feels. It’s both extremely funny and extremely poignant as the main character learns who she is, what’s really going on, and her intended role in it all. It’s one of the few series where the reader knows exactly what’s happening from the start, yet the main character being slow to catch on isn’t frustrating.
I wonder how long they’ll be able to keep it free? GPT-4 isn’t cheap.
Also, if you don’t feel comfortable building Bookworm from source yourself, and you feel like you can trust me, here’s a build of the latest Bookworm code from GitHub for 64-bit Windows: https://www.sendspace.com/pro/dl/rd388d
If you use Bookworm and use the built-in support for espeak, you can get up to 600 words per minute or so. Dectalk can go well over 900 words per minute. As far as I know, Cocoa tops out at around 500 words per minute. So all of the options except Piper should be fine for you.
No, Mistral 7B can’t describe or work with images. Thanks for answering!
It really depends on your use case. If you want something that sounds pretty okay and is decently fast, Piper fits the bill. However, it’s just a command-line TTS engine; you’ll need to build all the supporting infrastructure yourself if you want it to read audiobooks. https://github.com/rhasspy/piper
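To give a sense of what “supporting infrastructure” means here: Piper’s CLI reads text on stdin and writes a WAV file, so reading a whole book mostly comes down to chunking the text and driving the CLI in a loop. A minimal sketch of that glue, assuming `piper` is on your PATH and you’ve downloaded a voice model — the model filename, chunk size, and helper functions below are placeholders of mine, not anything Piper ships:

```python
import subprocess
from pathlib import Path

def chunk_text(text, max_chars=2000):
    """Split text on paragraph boundaries into pieces small enough to feed Piper."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        if current and len(current) + len(para) > max_chars:
            chunks.append(current.strip())
            current = ""
        current += para + "\n\n"
    if current.strip():
        chunks.append(current.strip())
    return chunks

def synthesize(chunk, index, model="en_US-lessac-medium.onnx", out_dir="audiobook"):
    """Pipe one chunk of text through the piper CLI; writes out_dir/partNNN.wav."""
    Path(out_dir).mkdir(exist_ok=True)
    out_file = Path(out_dir) / f"part{index:03d}.wav"
    subprocess.run(
        ["piper", "--model", model, "--output_file", str(out_file)],
        input=chunk.encode("utf-8"),
        check=True,
    )
    return out_file
```

Each chunk becomes its own `partNNN.wav`, and most players will pick the files up in order — but that’s exactly the kind of scaffolding you have to write yourself.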
An extension for the free and open source NVDA screen reader to use piper lives here: https://github.com/mush42/piper-nvda
If you want something that can run in real time, though it sounds somewhat robotic, you want Dectalk. This repo comes with libraries and DLLs, as well as several sample applications. Note, however, that the licensing status of this code is…uh…dubious, to say the least. Dectalk was abandonware for years, and the source code was leaked on a mailing list in the 2000s. However, ownership of the code was recently re-established, and Dectalk is now a commercial product once again. But the new owners haven’t come after the repo yet: https://github.com/dectalk/dectalk
If you want a robotic but realtime voice that’s fully FOSS with known licensing status, you want espeak-ng: https://github.com/espeak-ng/espeak-ng
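For reference, a couple of example invocations, assuming espeak-ng is installed; `-s` sets the speaking rate in words per minute (the default is 175), and the voice and filenames are just placeholders:

```shell
# Speak at 600 words per minute with the US English voice
espeak-ng -v en-us -s 600 "Testing espeak-ng at high speed."

# Read a text file and write the audio to a WAV file instead of the sound card
espeak-ng -v en-us -s 600 -f chapter1.txt -w chapter1.wav
```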
If you want a fully fledged software application to read things to you, but don’t need a screen reader and don’t want to build scripts yourself, you want bookworm: https://github.com/blindpandas/bookworm
Note, however, that you should build bookworm from source. While the author accepts pull requests, because of his circumstances, he’s no longer able to build new releases: https://github.com/blindpandas/bookworm/discussions/224
If you are okay with using closed-source freeware, Balabolka is another way to go to get a full text to speech reader: https://www.cross-plus-a.com/balabolka.htm
Can Mistral describe images yet? Not sure if it’s multimodal or not. If it could, that would be a super useful feature for those of us over on rblind.com. And/or is the code available somewhere for us to hack into something like OpenRouter and spin up a copy?
Personally, I find myself renting GPU time and running Goliath 120B. Smaller models could do what I’m doing if I spent more time optimizing my prompts, but every day I’m doing different tasks, and Goliath 120B will just handle whatever I throw at it, no matter how sloppy I am. I’ve also been playing with LLaVA and Hermes vision models to describe images to me. However, when I really need alt text for an image I can’t see, I still find myself resorting to GPT-4; the open-source options just aren’t as accurate or detailed.
Apparently! I don’t hide my data in any way, and I constantly get ads in languages I don’t speak. Usually French, but sometimes Hindi or Chinese. And as a blind person myself, I’m not sure that my well-paid full-time job working in large-enterprise and big-tech accessibility is altruism deserving of thanks haha.
I assume it’s because I live in Canada, and big American data just assumes all Canadians speak French. I regularly get French ads on English websites.
I don’t block anything. I work in accessibility, so it’s important to me to know what the experience is like for my fellow users with disabilities. I also don’t want to recommend sites or apps that are riddled with inaccessible ads; I’d rather not give them traffic at all. Yet even though I let them track me, I still get ads in a language I don’t speak for cars I can’t drive. What’re they doing with all that data?
Good to know; thanks! I’ll keep an eye on it.
I was having issues with outgoing federation to Mastodon on 0.19.0. I just did the update five minutes ago, so we’ll see if that fixes it. If you’re seeing this comment I guess it’s working at the moment.
A couple reasons, I think:
AI dubbing: this makes it way easier for YouTube to add secondary dubbed tracks to videos in multiple languages. Based on Google’s push to add AI into everything, including creating AI-related OKRs, that’s probably a primary driver. Multiple audio tracks are just the infrastructure needed for AI dubbing.
Audio description: Google is fighting enough antitrust-related legal battles right now. The fact that YouTube doesn’t support audio description for those of us who are blind has been an issue for a long time, and now that basically every other video streaming service supports it, I suspect they’re starting to feel increased pressure to get on board. Once again, multiple audio tracks are the infrastructure needed to offer audio description.
Surprised nobody has mentioned my two favourites:
Most of the other stuff I listen to is either industry specific or fandom/hobby specific.
I run the RBlind.com Lemmy instance at Accuris Hosting. Decent virtual machines, easy IPv6 support, and everything works fine. Prices are a bit on the high end, but it’s worth it to me to use a provider located in my country, where I understand all of the associated laws and can pay in my own currency via my local bank. Also, I’d rather not give money to big tech if I can help it, and support local business instead. This isn’t sponsored or anything; I’m just a mostly contented customer.
Also, of course, the fact that the control panel is screen-reader accessible is super important to me, though I doubt anyone else cares. But unfortunately that’s not yet the case with most of the larger cloud providers like AWS. And if they do deploy an inaccessible update, the company is small enough that I can send an email and get an answer from a human who has actually read what I wrote, rather than a corporate AI.
Problem was that I usually only discovered the issue when I went to read the book lol