Exactly, so this is purely performative on their part. Businesses shouldn't virtue signal; it's a little pathetic.
Companies do honour robots.txt. Maybe not the small project made by some guy somewhere. But large companies do.
That's not really virtue signaling unless the org is using it for PR reasons; it's just asking others to respect your wishes in a cooperative community sense rather than making a legal demand. This is more the technical side of things than the politics everyone injects into it.
I'm a proponent of these LLaMA systems; they are really just the next iteration of search systems. Just like search engines, they use traffic and server time with their queries, and it's good manners for everyone to follow the robots.txt limits of every site, but the freedom for a third party to read a site for whatever reason is still inherent to an open internet. If you don't want to take part in the open-community side of the open internet, you don't have to expose anything to public access that can be scraped.
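For what it's worth, honouring robots.txt is cheap to do. Here's a minimal sketch of how a well-behaved crawler might check it before fetching a page, using Python's standard-library urllib.robotparser; the "ExampleBot" user agent and the example.com URLs are placeholders, not any particular crawler's.

```python
from urllib import robotparser

# Load and parse the site's robots.txt.
rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

user_agent = "ExampleBot"  # placeholder user-agent string
target = "https://example.com/some/article"

if rp.can_fetch(user_agent, target):
    # Respect any Crawl-delay directive before actually fetching.
    delay = rp.crawl_delay(user_agent)
    print(f"Allowed to fetch {target} (crawl delay: {delay})")
else:
    print(f"robots.txt disallows {target} for {user_agent}")
```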
I rarely read paywalled news sites because they opt not to be part of the open community of information sharing that our open internet represents.
Except it doesn’t credit the source nor direct traffic. So… almost an entirely different beast.
That depends fully on the implementation. Bing does give you sources, but ChatGPT generates "original content" based on all the shit it's scraped.
A bit of a tangent, but I’ve recently shifted my focus to reading content behind paywalls and have noticed a significant improvement in the quality of information compared to freely accessible sources. The open internet does offer valuable content, but there’s often a notable difference in journalistic rigor when a subscription fee is involved. I suspect that this disparity might contribute to the public’s vulnerability to disinformation, although I haven’t fully explored that theory.