There are full bypass options now that completely remove their hardware from your network, using an ONT that lets you clone the AT&T device's serial number. Just a heads up.
You forgot to say unsubscribe
It’s just a feed posting stories from https://news.ycombinator.com/ which at times has good stuff and relatively insightful user discussions.
You aren’t giving us enough information to even speculate on an answer. Are these enterprise-grade servers in a datacenter? Are these homemade boxes with consumer or low-grade hardware that you’re calling servers? Are they in the same datacenter, or does the traffic go out over the Internet? What sits between the hops on the network? Is the latency consistent? What’s the quality of both ends of the connection? Fiber? Wi-Fi? Mobile? Satellite?
Does it drop to nothing, or just settle into a constant slower speed? What have you tried to troubleshoot? Is it only rsync, or do other tests between the hosts show the same behavior?
Give us more and you might get some help. If these hosts are Linux, I would start with iperf for a more scientific test, then report back with more info.
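For example, a quick iperf3 sanity check, with hostb as a hypothetical placeholder for the remote machine:

```
# on one host, start a listener
iperf3 -s

# on the other, run a 30-second test toward it...
iperf3 -c hostb -t 30

# ...and again in the reverse direction
iperf3 -c hostb -t 30 -R
```

If iperf3 holds full line rate in both directions, the problem is likely rsync itself (ssh cipher overhead, lots of small files, disk I/O) rather than the network.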
Yeah, the previous bypass used a certificate that you’d have to periodically re-authenticate with via 802.1X. This new method doesn’t have that requirement; you just need the specialized hardware for it, like that Azores D20 box or one of the SFP+ XGS-PON modules that you can program.
I’ve been using it without any intervention for a little over eight months now. I even have my /29 static IP block allocated on it, while still being able to use the DHCP address they hand out. You get to use the whole /29 too, without the AT&T box stealing one of the addresses.
I think the originator of it was on DSLReports, but I couldn’t find the link on mobile. I’m sure a Google search would turn up a secondary source on some tech blog or Medium post if that makes you feel better. There’s also a Discord that covers most XGS-PON bypass methods that I could share too, though they keep flipping it to private for whatever reason.
Other links and info, if you are being serious and not passive-aggressive. AT&T is quick with DMCA takedowns, so that’s probably why the info is only fleetingly available at times, but DSLReports seems to be pretty reliable/resistant to them.
https://www.dslreports.com/forum/r33665048-AT-T-Fiber-XGS-PON-SFP-Modules-for-AT-T-Fiber
https://hackaday.io/project/193110-bypassing-the-bgw-320-using-an-azores-cots-ont
https://forum.netgate.com/topic/99190/att-uverse-rg-bypass-0-2-btc/440
https://simeononsecurity.com/guides/bypassing-the-bgw320-att-fiber-modem-router/
You can totally bypass AT&T Fiber now with your own SFP+ XGS-PON module, fiber terminated at your own device, without needing to exfiltrate certs or do anything other than clone the identifying info from the AT&T router’s label, depending on the technology they’re using in your area.
An organization that opposes the Galactic Empire. The Alliance is a coalition of rebel cells and resistance groups that formed in response to the Empire’s authoritarian rule.
Probably not $30k. Maybe closer to $5-10k, depending on when and how he bought it. Second-hand gear can be really cheap, whereas brand-new server gear costs an arm and a leg, so it could be $30k+ if he bought most of it new. The real cost is the time it takes to configure it all and get it working well, which, I can attest, can be a boatload.
My setup is maybe a bit more overkill, and I’ve probably spent closer to $30k once you factor in storage, with nearly all of the server equipment bought second-hand. You know you’ve gone overboard when you’re talking about running 100Gbit to more rooms, you have a categorization system for keeping the compatible SFP modules and cabling organized, and you need a second whole 42U rack, with a couple of in-service servers sitting on chairs waiting for said rack.
There is a storied history in computing of using tongue-in-cheek, self-referential acronyms to inject some humor while distinguishing things that purposely fill a niche in a world of competing, often pricey, commercial software, or that exist for other hackerish reasons.
So I bet you’re rubbing the wrong way those of us who remember that GNU’s Not Unix and, more specifically, that Wine Is Not an Emulator. Because they really aren’t.
I don’t believe this is possible; it’s actively protected against in the DHT protocol.
The return value for a query for peers includes an opaque value known as the “token.” For a node to announce that its controlling peer is downloading a torrent, it must present the token received from the same queried node in a recent query for peers. When a node attempts to “announce” a torrent, the queried node checks the token against the querying node’s IP address. This is to prevent malicious hosts from signing up other hosts for torrents. Since the token is merely returned by the querying node to the same node it received the token from, the implementation is not defined. Tokens must be accepted for a reasonable amount of time after they have been distributed. The BitTorrent implementation uses the SHA1 hash of the IP address concatenated onto a secret that changes every five minutes and tokens up to ten minutes old are accepted.
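A minimal sketch of that token scheme in Python, using the parameters quoted above (SHA1 over the IP plus a secret rotated every five minutes, with tokens honored for up to ten):

```python
import hashlib
import os
import time

ROTATE_SECS = 300  # the secret changes every five minutes per the spec


class TokenManager:
    def __init__(self):
        # current and previous secret, so tokens stay valid ~10 minutes
        self.secrets = [os.urandom(16), os.urandom(16)]
        self.last_rotation = time.monotonic()

    def _rotate_if_due(self):
        while time.monotonic() - self.last_rotation >= ROTATE_SECS:
            self.secrets = [os.urandom(16), self.secrets[0]]
            self.last_rotation += ROTATE_SECS

    def issue(self, ip: str) -> bytes:
        """Token handed out in a get_peers response."""
        self._rotate_if_due()
        return hashlib.sha1(ip.encode() + self.secrets[0]).digest()

    def verify(self, ip: str, token: bytes) -> bool:
        """Accept only tokens minted for this IP with the current or
        previous secret, i.e. distributed within the last ~10 minutes."""
        self._rotate_if_due()
        return any(
            hashlib.sha1(ip.encode() + s).digest() == token
            for s in self.secrets
        )
```

An announce from a different IP, or with a token older than the two secret windows, simply fails verification, which is what stops a malicious host from signing other hosts up for torrents.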
I believe you would have to know the torrent (its infohash) first; then you could discover the peers for it. That’s probably why that tool can’t tell you anything outside of its known list of torrents.
Maybe I’m misunderstanding the purpose or goal, but wouldn’t this be a perfect use case for a virtual machine? I’m surprised no one has suggested that. A one-off, temporary environment that’s easily reverted to pristine with snapshots sounds like exactly what you’d want for testing something like this.
I’m pretty sure I owe my career in computers to the high seas. Napster led to IRC, which led to the endless rabbit hole of many a sleepless night in the chat rooms of the ’90s.
Wasn’t 1999 the peak of the price gouging from the record labels? It was like $20-25 for a new album from a ton of the major labels, from what I remember.
It’s extremely common in the enterprise, where the cost of a $100k+ server isn’t even the most expensive part of running, maintaining, and servicing it. If your homelab isn’t practicing 3-2-1 backups yet (at least three copies of your data, two local but on different media/devices, and at least one copy off-site), I’d spend money on that before ECC.
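A minimal sketch of a 3-2-1 routine, where /data, /mnt/backup2, and the bucket name are all hypothetical placeholders:

```
# copy 1 is the live data in /data
# copy 2: a second local device on different media
rsync -a --delete /data/ /mnt/backup2/data/

# copy 3: off-site, e.g. a restic repository on object storage
# (credentials supplied via environment variables)
restic -r s3:s3.amazonaws.com/my-backup-bucket backup /data
```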
From the link:
@PriorProject:
The answers in this thread are surprisingly complex, and though they contain true technical facts, their conclusions are generally wrong in terms of what it takes to maintain file integrity. The simple answer is that ECC ram in a networked file server can only protect against memory corruption in the filesystem, but memory corruption can also occur in application code and that’s enough to corrupt a file even if the file server faithfully records the broken bytestream produced by the app.
If you run a Postgres container, and the non-ecc DB process bitflips a key or value, the ECC networked filesystem will faithfully record that corrupted key or value. If the DB bitflips a critical metadata structure in the db file-format, the db file will get corrupted even though the ECC networked filesystem recorded those corrupt bits faithfully and even though the filesystem metadata is intact.
If you run a video transcoding container and it experiences bitflips, that can result in visual glitches or in the video metadata being invalid… again even if the networked filesystem records those corrupt bits faithfully and the filesystem metadata is fully intact.
ECC in the file server prevents complete filesystem loss due to corruption of key FS metadata structures (or at least memory bit-flips… but modern checksumming fs’s like ZFS protect against bit-flips in the storage pretty well). And it protects from individual file loss due to bitflips in the file server. It does NOT protect from the app container corrupting the stream of bytes written to an individual file, which is opaque to the filesystem but which is nonetheless structured data that can be corrupted by the app. If you want ECC-levels of integrity you need to run ECC at all points in the pipeline that are writing data.
That said, I’ve never run an ECC box in my homelab, have never knowingly experienced corruption due to bit flips, and have never knowingly had a file corruption that mattered despite storing and using many terabytes of data. If I care enough about integrity to care about ECC, I probably also care enough to run multiple pipelines on independent hardware and cross-check their results. It’s not something I would lose sleep over.
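To make the quoted point concrete, here’s a toy illustration of my own (not from the link): one bitflip in application memory silently changes a value, another makes the file unparseable, and in both cases the filesystem faithfully stored every byte it was handed.

```python
import json

record = json.dumps({"balance": 1000}).encode()  # what the app meant to write

silent = bytearray(record)
silent[13] ^= 0x08  # '0' -> '8': still valid JSON, value silently wrong

broken = bytearray(record)
broken[10] ^= 0x08  # ':' -> '2': the file no longer parses at all

print(json.loads(bytes(silent)))  # {'balance': 1800}
try:
    json.loads(bytes(broken))
except json.JSONDecodeError:
    print("stored faithfully, corrupt anyway")
```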
DDR5 has built-in on-die ECC, which corrects bit errors inside the DRAM chips themselves but, unlike full side-band ECC, doesn’t protect data on the way between the module and the memory controller. That might be worthwhile depending on your setup.
Your ECC on the Pi, I believe, isn’t for the memory chip but for the ARM SoC’s on-die caches.
For me personally, if my racked server supports it, I get ECC. If it doesn’t, I don’t sweat it. Redundancy in drives, power, and networking is much more important to me; those are orders of magnitude more likely to fail, in my anecdotal experience. If I can put those dollars toward another, higher-probability failure instead, I do that.
DNS is a linchpin of my network (and of the wife-approval factor), so I splurge a bit there: physical redundancy via an identical mini computer that fails over to the same IP if the first box dies. Those considerations come way before whether the server has ECC. Just my $0.02.
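One common way to do that kind of same-IP failover is VRRP via keepalived; a minimal sketch, with eth0 and 192.168.1.53 as hypothetical placeholders for the interface and the shared DNS address:

```
# /etc/keepalived/keepalived.conf on the primary box; the standby is
# identical except for "state BACKUP" and a lower priority.
vrrp_instance dns_vip {
    state MASTER
    interface eth0
    virtual_router_id 53
    priority 150
    advert_int 1
    virtual_ipaddress {
        192.168.1.53/24
    }
}
```

Clients only ever point at 192.168.1.53; whichever box currently holds MASTER answers on it, and the standby claims the address within seconds of the primary dropping.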
Google Lens gives:
hello i am the king and you are a subject and i have money like water and you are the tap and you have big worries and I have another million and I’m going to tell you that we do it together
Not sure either. Maybe they set the default app for handling the mailto: protocol to :(){ :|:& };: or something, to make life interesting?
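For anyone who doesn’t recognize it, that string is the classic bash fork bomb; with the function given a readable name instead of :, it’s equivalent to:

```
# define a function that pipes itself into itself and backgrounds the
# result, then call it -- the process count doubles until the box chokes
bomb() { bomb | bomb & }; bomb
```

(Don’t run it outside a throwaway VM.)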
I’m going on 25+ years and at the principal engineer/architect level. My take would be to find something, try it, and see if it excites you. There isn’t a wrong answer. At worst you’ll become a generalist, fluent in more and more things until you find a niche somewhere in the array of topics you’re conversant in. At best you’ll dive deep into a specific area and become more and more of an expert on it.
Right now I’m really into Rust, rewriting tons of what I’ve done in the past with more experience under my belt, and learning more about WebAssembly. Running Rust in WebAssembly on any platform, including the user’s browser, without really having to think about distribution targets is something that excites me. I think I can glean a future there that might compete with how revolutionary Kubernetes has been, but even if I’m wrong, the things I’ve learned will still hold up.
If the huge array of things overwhelms you, find a problem and try to solve it. Just the act of doing that and heading down the rabbit hole can open up new worlds you never knew existed, and it strengthens what I’d consider one of the best qualities in good devs: competent, independent troubleshooting. The fun I’ve had trying my hand at bypassing AT&T router restrictions, extracting certificates from ROMs, architecting my home network with self-hosted Kubernetes and all the home-automation stuff, low-level embedded C programming for homemade IoT sensors… The things you can do with tech are almost always within reach of anyone with some time and an Internet connection.
Also, don’t neglect the open source community. Start a project, or contribute to someone else’s. Probably the biggest leap I took as a dev came from a simple change to a large OSS project. The mentality, the guardrails, the rules the project imposed on itself were incredibly impressive to me, and I learned so much about the benefits of code quality, good review, and automation. It really opened my eyes to what a small team can do given a common goal they’re passionate about, something that can be missing from enterprises where profit is king.
Let us know where you end up. You never know, you might inspire another dozen people with something that interests you. Good luck!
I might have a few hours a month to help out if there’s something I feel I can help with.