I still don’t see how adding a tiny little line to robots.txt means OpenAI can no longer scrape data. Seems like they can still go in and harvest info manually, right? And that’s not exactly a large list of companies.
That’s the honor system, for sure. OpenAI has promised that its bots will honor this line in robots.txt. But unless these companies have implemented some detect-and-block method of their own, nothing physically stops the bots from gathering data anyway.
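For reference, the opt-out being discussed really is just a couple of lines. A sketch of what a site might serve at /robots.txt to ask OpenAI’s documented crawler (user-agent `GPTBot`) to stay away from everything:

```
User-agent: GPTBot
Disallow: /
```

That’s the whole mechanism: a plain-text request, with nothing behind it enforcing compliance.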
Exactly, so this is purely performative on their part. Businesses shouldn’t virtue signal; it’s a little pathetic.
That’s not really virtue signaling unless the org is using it for PR reasons. It’s just asking others to respect your wishes in a cooperative-community sense rather than making a legal demand. This is more the technical side of things than the politics everyone injects.
I’m a proponent of these LLaMA systems; they’re really just the next iteration of Search systems. Just like search engines, they use traffic and server time with their queries, and it’s good manners for everyone to follow the robots.txt limits of every site. But under an open internet, a third party is still inherently free to read the site for whatever reason. If you don’t want to take part in the open-community part of the open internet, you don’t have to expose anything at all to public access that can be scraped.
I rarely read paywalled news sites because they opt not to be part of the open community of information sharing that our open internet represents.
next iteration of Search systems.
Except it doesn’t credit the source nor direct traffic. So… almost an entirely different beast.
That depends entirely on the implementation. Bing does give you sources, but ChatGPT generates “original content” based on all the shit it’s scraped.
A bit of a tangent, but I’ve recently shifted my focus to reading content behind paywalls and have noticed a significant improvement in the quality of information compared to freely accessible sources. The open internet does offer valuable content, but there’s often a notable difference in journalistic rigor when a subscription fee is involved. I suspect that this disparity might contribute to the public’s vulnerability to disinformation, although I haven’t fully explored that theory.
Companies do honour robots.txt. Maybe not the small project made by some guy somewhere. But large companies do.
Robots.txt was never any hurdle; it’s just a flag which you are free to ignore.
Why wouldn’t they? It’s totally legal to write up something that visits pages in a genuine browser and takes all the content from the page source.
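To make the “free to ignore” point concrete, here’s a minimal sketch using Python’s standard-library `urllib.robotparser`. The robots.txt content and URLs are hypothetical; the point is that checking the rules is a voluntary, client-side step a scraper can simply skip:

```python
from urllib import robotparser

# Hypothetical robots.txt a site might serve to opt out of OpenAI's crawler.
robots_txt = """\
User-agent: GPTBot
Disallow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# The rules say GPTBot may not fetch this page...
print(rp.can_fetch("GPTBot", "https://example.com/article"))      # False
# ...while a browser-like user agent is unrestricted.
print(rp.can_fetch("Mozilla/5.0", "https://example.com/article")) # True
# Nothing in the protocol enforces the "False" answer: a scraper that
# never calls can_fetch(), or ignores its result, downloads the page anyway.
```

The parser only reports what the site requested; honoring the answer is entirely up to the client.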
This attitude right here is my point. Thanks for unintentionally making my point for me ;)
legal =/= right
I’m so tired of people thinking the boundaries of the law are the boundaries of what’s socially acceptable; they aren’t. The boundary of the law is where we get so fed up with you that we arrest your ass. The grey area in the middle, where you’re a shit human but not doing anything illegal, is not a place to brag about being.
I was intentionally making your point!
And the boundary of laws, where fines are the punishment, is simply wealth.
If breaking a law whose punishment is a fine earns me more than the fine sets me back, then it’s a no-brainer; it’s profit.
Not if there’s a EULA forbidding it. That’s part of the reason for robots.txt: it’s sort of the agreement bots have to pass through, versus the one a person sees.
And if you mark those in this list that are using AI, how long until the Venn diagram is a circle?