It’s plausible but unlikely, I think. That’s putting a lot of faith in shitty pinhole cameras being able to see twenty-two 4K pixels one hex value lighter or darker, when most cameras have atrocious definition/sharpness and get blown out by light or blinded by darkness. I dunno, this reminds me of the screaming around the Microsoft Kinect in 2013. They had bad, shitty plans for the Kinect, but in the end it was just cheap hardware everyone hated. Idk.
There exists a technology that takes elements in a picture, like a bird in the background, a character, or a glass of water, and moves them just a few pixels. You can encode a lot of data that way, and it’s undetectable when all you have is a single copy to look at. They can encode your unique user identifier 1000 times in even a short video, so a camera is bound to pick up at least part of it each time.
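To make that concrete, here’s a toy sketch of the idea (mine, not anyone’s actual implementation; the function names and the box format are made up for illustration): nudge a region of each frame one pixel left or right depending on the bit being encoded.

```python
# Toy sketch of geometric watermarking: shift a region of each frame by one
# pixel to carry one bit. Everything here is illustrative, not a real scheme.
import numpy as np

def embed_bit(frame: np.ndarray, box: tuple, bit: int) -> np.ndarray:
    """Shift the region box=(y0, y1, x0, x1) one pixel right for a 1, left for a 0."""
    y0, y1, x0, x1 = box
    out = frame.copy()
    out[y0:y1, x0:x1] = np.roll(frame[y0:y1, x0:x1], 1 if bit else -1, axis=1)
    return out

def embed_user_id(frames, box, user_id: int, id_bits: int = 32):
    """Cycle the user ID's bits across the frame sequence, one bit per frame."""
    bits = [(user_id >> i) & 1 for i in range(id_bits)]
    return [embed_bit(f, box, bits[i % id_bits]) for i, f in enumerate(frames)]
```

At 24 fps a 32-bit ID cycles through in under two seconds, so a feature-length video repeats it thousands of times; a noisy camera capture only needs to recover some of those repetitions and majority-vote the bits.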
I guess if the TV itself was doing the DRM recognition? Idk though, I’ve seen alarmist posting like this before… seems to me evil tech shit usually gets done in more mundane ways?
It’s definitely possible, even trivial, to do. There are a thousand ways to encode just a few bytes of data undetectably in a video, and nothing but motivation stops them from using every one of them everywhere. I think it’s plenty mundane, and even trivial, for what they get out of it.
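For example, spread-spectrum watermarking is a decades-old textbook trick (a generic technique, not something I’m claiming any particular service uses): add a faint pseudorandom pattern whose sign carries a bit, then recover it later by correlating the captured frame against the same pattern.

```python
# Toy spread-spectrum watermark: one payload bit per frame, carried by the sign
# of a low-amplitude pseudorandom +/-1 pattern. Parameters are illustrative.
import numpy as np

def embed(frame: np.ndarray, bit: int, key: int, strength: float = 2.0) -> np.ndarray:
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=frame.shape)   # pseudorandom carrier
    return np.clip(frame + (1.0 if bit else -1.0) * strength * pattern, 0, 255)

def detect(frame: np.ndarray, key: int) -> int:
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=frame.shape)
    return 1 if float(np.sum(frame * pattern)) > 0 else 0  # sign of the correlation carries the bit
```

A real system would spread each bit over many frames and detect against the original (or a filtered version) to survive compression and camera noise, but the core idea really is that small.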
In every frame, easily identifiable by a shitty pinhole camera though?
I updated my comment with more details
I feel like if you just turn up the compression ratio slightly, all that nuance is lost, making the watermark nonexistent or unusable.
Yes, especially since Netflix in particular has atrocious compression.