• palordrolap · 180 points · 1 year ago

    Put something in robots.txt that isn’t supposed to be hit and is hard to hit by non-robots. Log and ban all IPs that hit it.

    Imperfect, but can’t think of a better solution.

    • Lvxferre [he/him] · 98 points · 1 year ago · edited

      Good old honeytrap. I’m not sure, but I think that it’s doable.

      Have a honeytrap page somewhere in your website. Make sure that legit users won’t access it. Disallow crawling the honeytrap page through robots.txt.

      Then if some crawler still accesses it, you could record+ban it as you said… or you could be even nastier and let it do so. Fill the honeytrap page with poison - nonsensical text that would look like something that humans would write.
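
      Roughly, a minimal sketch of what I mean (assuming Flask; the /crawler-bait path and the word pool are just placeholders):

      # Honeytrap sketch: log the violator and serve it nonsense "poison".
      import random
      from flask import Flask, request

      app = Flask(__name__)
      WORDS = "the a quantum velvet spoon runs gently purple because seventeen".split()

      @app.route("/crawler-bait")  # disallowed in robots.txt, never linked for humans
      def honeytrap():
          # Record the offender; a real setup would feed this into a ban list.
          app.logger.warning("robots.txt violator: %s", request.remote_addr)
          # Serve text that looks vaguely human-written but is gibberish.
          sentences = (
              " ".join(random.choices(WORDS, k=random.randint(8, 20))).capitalize() + "."
              for _ in range(200)
          )
          return "<p>" + " ".join(sentences) + "</p>"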

      • @CosmicTurtle@lemmy.world · 40 points · 1 year ago

        I think I used to do something similar with email spam traps. Not sure if it’s still around, but basically you could help build NaCL lists by posting an email address on your website somewhere that was visible in the source code but not visible to normal users, like in a div positioned way off the left side of the screen.

        Anyway, spammers that do regular expression searches for email addresses would email it and get their IPs added to naughty lists.
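
        If memory serves, the markup was something along these lines (the address and offset are made-up placeholders, of course):

        <!-- Invisible to people, but sitting right there in the source for scrapers to harvest. -->
        <div style="position:absolute; left:-9999px;" aria-hidden="true">
          <a href="mailto:spamtrap@example.com">contact</a>
        </div>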

        I’d love to see something similar with robots.

        • Lvxferre [he/him] · 23 points · 1 year ago · edited

          Yup, it’s the same approach as email spam traps, except for the naughty list part… but holy fuck, a shareable bot IP list would be an amazing addition; it would increase the damage to those web crawling businesses.

        • Lvxferre [he/him] · 4 points · 1 year ago

          For banning: I’m not sure, but I don’t think so. It seems to me that prefetching behaviour is dictated by the page linking to another, so to avoid any issue all the site owner needs to do is not mark the honeytrap link for prefetching.

          For poisoning: I’m fairly certain that it doesn’t. At most you’d prefetch a page full of rubbish.

    • @PM_Your_Nudes_Please@lemmy.world · 13 points · 1 year ago

      Yeah, this is a pretty classic honeypot method. Basically, make something available, but in a way that a normal user would never access it. Then you know anyone who accesses it is not a normal user.

      I’ve even seen this done with Steam achievements before; there was a hidden game achievement which was only available via hacking. So anyone who used hacks immediately outed themselves with a rare achievement that was visible on their profile.

      • @CileTheSane@lemmy.ca · 2 points · 1 year ago

        There are tools that just flag you as having gotten an achievement on Steam, you don’t even have to have the game open to do it. I’d hardly call that ‘hacking’.

    • @Ultraviolet@lemmy.world · 3 points · 1 year ago · edited

      Better yet, point the crawler to a massive text file of almost-but-not-quite grammatically correct garbage to poison the model. Something it will recognize as language and internalize, but that will severely degrade the quality of its output.
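
      A crude sketch of one way to mass-produce that kind of garbage: keep real vocabulary but randomly swap word positions, so it still smells like language while the grammar is quietly broken (the file names are placeholders):

      import random

      def poison(text: str, swap_ratio: float = 0.3) -> str:
          """Swap a fraction of word positions: still word-like, quietly ungrammatical."""
          words = text.split()
          if len(words) < 2:
              return text
          for _ in range(int(len(words) * swap_ratio)):
              i, j = random.randrange(len(words)), random.randrange(len(words))
              words[i], words[j] = words[j], words[i]
          return " ".join(words)

      with open("seed_corpus.txt") as src, open("garbage.txt", "w") as out:
          for paragraph in src.read().split("\n\n"):
              out.write(poison(paragraph) + "\n\n")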

    • Aatube · -27 points · 1 year ago

      robots.txt is purely textual; you can’t run JavaScript or log anything. Plus, one who doesn’t intend to follow robots.txt wouldn’t query it.

      • @BrianTheeBiscuiteer@lemmy.world · 44 points · 1 year ago

        If it doesn’t get queried, that’s the fault of the web scraper. You don’t need JS built into the robots.txt file either. Just add a couple of lines like:

        User-agent: *
        Disallow: /here-there-be-dragons.html
        

        Any client that hits that page (and maybe doesn’t pass a captcha check) gets banned. Or even better, they get a long stream of nonsense.
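
        As a rough sketch of the ban version (assuming Flask; the trap path matches the robots.txt line above, and the ban list is just an in-memory set, so treat it as illustration rather than production code):

        from flask import Flask, abort, request

        app = Flask(__name__)
        banned_ips = set()  # in-memory only; a real setup would persist this or feed a firewall

        @app.before_request
        def drop_banned_clients():
            # Refuse everything from clients that already walked into the trap.
            if request.remote_addr in banned_ips:
                abort(403)

        @app.route("/here-there-be-dragons.html")
        def trap():
            # robots.txt said "don't"; anything landing here ignored it.
            banned_ips.add(request.remote_addr)
            return "Nothing to see here.", 200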

      • @ShitpostCentral@lemmy.world · 11 points · 1 year ago

        Your second point is a good one, but you absolutely can log the IP which requested robots.txt. That’s just a standard part of any HTTP server ever; no JavaScript needed.

      • @ricecake@sh.itjust.works · 9 points · 1 year ago

        People not intending to follow it is the real reason not to bother, but it’s trivial to track who downloaded the file and then hit something they were asked not to touch.

        Like, 10 minutes’ work to do it right. You don’t need JS to do it at all.
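
        Something like the following is the 10-minute version I mean: scan a common-format access log for clients that fetched robots.txt and then hit the disallowed path anyway (the log path and trap path are placeholders):

        import re

        LOG = "/var/log/nginx/access.log"      # placeholder path
        TRAP = "/here-there-be-dragons.html"   # the path robots.txt disallows

        line_re = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(?:GET|HEAD) (\S+)')
        fetched_robots, offenders = set(), set()

        with open(LOG) as fh:
            for line in fh:
                m = line_re.match(line)
                if not m:
                    continue
                ip, path = m.groups()
                if path.startswith("/robots.txt"):
                    fetched_robots.add(ip)
                elif path.startswith(TRAP) and ip in fetched_robots:
                    offenders.add(ip)  # read the rules, ignored them anyway

        print("\n".join(sorted(offenders)))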

  • Cosmic Cleric · 116 points · 1 year ago

    As unscrupulous AI companies crawl for more and more data, the basic social contract of the web is falling apart.

    Honestly it seems like in all aspects of society the social contract is being ignored these days; that’s why things seem so much worse now.

  • Optional · 99 points · 1 year ago

    Well the Trump era has shown that ignoring social contracts and straight-up crime are only met with profit and slavish devotion from a huge community of dipshits. So. Y’know.

    • @Ithi@lemmy.ca · 3 points · 1 year ago

      Only if you’re already rich or in the right social circles though. Everyone else gets fined/jail time of course.

  • @moitoi@feddit.de · 71 points · 1 year ago

    Alternative title: Capitalism doesn’t care about morals and contracts. It wants to make more money.

    • AutistoMephisto · 13 points · 1 year ago

      Exactly. Capitalism spits in the face of the concept of a social contract, especially if companies themselves didn’t write it.

      • @WoodenBleachers@lemmy.world · -2 points · 1 year ago

        Capitalism, at least in a laissez-faire marketplace, operates on a social contract; fiat money is an example of this. The market decides, the people decide. Are there ways to amass a certain amount of money to make people turn a blind eye? For sure, but all systems have their ways to amass power, no matter what.

    • @gapbetweenus@feddit.de · 5 points · 1 year ago

      Capitalism is a concept; it couldn’t care even if it wanted to, and it can’t want anything to begin with. It’s the humans. You will find greedy, immoral ones in every system, and they will make it miserable for everyone else.

      • @Aceticon@lemmy.world · 11 points · 1 year ago · edited

        Capitalism is the widely accepted self-serving justification of those people for their acts.

        The real problem is in the “widely accepted” part: a sociopath killing an old lady and justifying it because “she looked funny at me” wouldn’t be “widely accepted” and Society would react in a suitable way, but if said sociopath scammed the old lady’s pension fund because (and this is a typical justification in Investment Banking) “the opportunity was there and if I didn’t do it somebody else would’ve, so better be me and get the profit”, it’s deemed “acceptable” and Society does not react in a suitable way.

        Mind you, Society (as in, most people) might actually want to react in a suitable way, but the structures in our society are such that the Official Power Of Force in our countries is controlled by a handful of people who got there with crafty marketing and backroom plays, and those deem it “acceptable”.

        • @gapbetweenus@feddit.de · 7 points · 1 year ago

          People will always find justifications to be assholes. Capitalism tried to harvest that energy and unleashed its full potential, with rather devastating consequences.

          • @Chee_Koala@lemmy.world · 1 point · 1 year ago

            Sure, but think-structures matter. We could have a system that doesn’t reward psychopathic business choices (as much), while still improving our lives bit by bit. If the system helps a bit with making the right choices, that would matter a lot.

            • @gapbetweenus@feddit.de · 1 point · 1 year ago

              That’s basically what I wrote: (free) market economy, especially in combination with credit-based capitalism, gives those people a perfect system to thrive in. This seems to result in very fast progress and immense wealth, which is not distributed very equally. Then again, I prefer Bezos and Zuckerberg as CEOs rather than as politicians or warlords. Dudes with big egos and ambitions need something productive to work on.

        • @Katana314@lemmy.world · -2 points · 1 year ago

          It’s deemed “acceptable”? A sociopath scamming an old lady out of her pension is basically the “John Wick’s dog” moment that leads to the insane, death-filled warpath in the recent movie The Beekeeper.

          This is the kind of edgelord take that routinely expects worse than the worst of society with no proof to their claims.

          • @Aceticon@lemmy.world · 4 points · 1 year ago

            This is the kind of shit I saw from the inside in Investment Banking before and after the 2008 Crash.

            None of those assholes ever gets prison time for the various ways in which they abuse markets and even insider info to swindle, amongst others, Pension Funds; so de facto the Society we have, with the power structures it has, accepts it.

  • @rtxn@lemmy.world · 67 points · 1 year ago · edited

    I would be shocked if any big corpo actually gave a shit about it, AI or no AI.

    if exists("/robots.txt"):
        no it fucking doesn't
    
    • @bionicjoey@lemmy.ca · 35 points · 1 year ago

      Robots.txt is in theory meant to be there so that web crawlers don’t waste their time traversing a website in an inefficient way. It’s there to help, not hinder them. There is a social contract being broken here and in the long term it will have a negative impact on the web.

    • BargsimBoyz · 2 points · 1 year ago

      Yeah, I always found it surprising that everyone just agreed to follow a text file on a website telling them how to act. It’s one of the most significant, worst-thought-out parts of browsing that’s still with us from the beginning, pretty much.

  • circuitfarmer · 59 points · 1 year ago

    Most every other social contract has been violated already. If they don’t ignore robots.txt, what is left to violate?? Hmm??

    • BlanketsWithSmallpox · 38 points · 1 year ago

      It’s almost as if leaving things to social contracts vs regulating them is bad for the layperson… 🤔

      Nah fuck it. The market will regulate itself! Tax is theft and I don’t want that raise or I’ll get in a higher tax bracket and make less!

      • @Jimmyeatsausage@lemmy.world · 10 points · 1 year ago · edited

        This can actually be an issue for poor people, not because of tax brackets but because of income-based assistance cutoffs. If a $1/hr raise throws you above those cutoffs, that extra $160 could cost you $500 in food assistance, $5-$10/day for school lunch, or get you kicked out of government-subsidized housing.

        Yet another form of persecution that the poor actually suffer and the rich pretend to.

      • @SlopppyEngineer@lemmy.world · 6 points · 1 year ago

        And then the companies hit the “trust thermocline”: customers leave them in droves, and the companies wonder how this could’ve happened.

      • Ogmios · -12 points · 1 year ago

        Yea, because authoritarianism is well known to be sooooo good for the layperson.

      • @wise_pancake@lemmy.ca · 42 points · 1 year ago · edited

        robots.txt is a file available at a standard location on web servers (example.com/robots.txt) which sets guidelines for how scrapers should behave.

        That can range from saying “don’t bother indexing the login page” to “Googlebot go away”.

        It’s also explained in the first paragraph of the article.
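
        A tiny illustrative sketch (the paths and bot names are just for show):

        # "don't bother indexing the login page"
        User-agent: *
        Disallow: /login

        # "Googlebot go away"
        User-agent: Googlebot
        Disallow: /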

      • @mrnarwall@lemmy.world · 14 points · 1 year ago

        Robots.txt is a file that is accessible as part of an HTTP request. It’s a plain configuration file that sets rules for what automated web crawlers are allowed to do; it can specify both who is and who isn’t allowed. Googlebot is usually the most widely allowed crawler, just because it’s how Google finds websites for its search results. But it’s basically the honor system. You could write a scraper today that goes to websites, is told it doesn’t have permission to view a page, ignores that, and still gets the information.
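
        You can see how voluntary it is from the client side: Python even ships a robots.txt parser in the standard library, but nothing forces a scraper to consult it (the URLs and user-agent name here are placeholders):

        from urllib.robotparser import RobotFileParser

        rp = RobotFileParser()
        rp.set_url("https://example.com/robots.txt")
        rp.read()

        url = "https://example.com/private/report.html"
        if rp.can_fetch("MyCrawler", url):
            print("allowed - crawl politely")
        else:
            print("disallowed - a polite bot stops here; a rude one just fetches it anyway")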

  • kingthrillgore · 19 points · 1 year ago

    I explicitly have my robots.txt set to block out AI crawlers, but I don’t know if anyone else will observe the protocol. They should have tools I can submit a sitemap.xml against to know if I’ve been parsed. Until they bother to address this, I can only assume their intent is hostile, and unless someone gets serious about building a honeypot and exposing the tooling for us to deploy at large, my options are limited.
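
    For reference, mine looks roughly like this. The user-agent tokens are the ones the big crawler operators have published (GPTBot for OpenAI, CCBot for Common Crawl, Google-Extended for Google’s AI training), but the list keeps changing, so treat it as a sketch and check their current docs; the sitemap URL is a placeholder:

    User-agent: GPTBot
    Disallow: /

    User-agent: CCBot
    Disallow: /

    User-agent: Google-Extended
    Disallow: /

    Sitemap: https://example.com/sitemap.xml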

    • @phx@lemmy.ca · 30 points · 1 year ago · edited

      The funny (in a “wtf”, not “haha”, sense) thing is, individuals such as security researchers have been charged under digital trespassing laws for stuff like accessing publicly available systems and changing a number in the URL to get access to data they normally wouldn’t, even after doing responsible disclosure.

      Meanwhile, companies completely ignore the standard meant to say “you are not allowed to scrape this data” and then use OUR content/data to build up THEIR datasets, including for AI etc.

      That’s not a “violation of a social contract” in my book; that’s violating the terms of service for the site and essentially infringement on copyright etc.

      No consequences for them though. Shit is fucked.

  • @lily33@lemm.ee · 18 points · 1 year ago

    What social contract? When sites regularly have a robots.txt that says “only Google may crawl”, and are effectively helping enforce a monopoly, that’s not a social contract I’d ever agree to.

  • 𝐘Ⓞz҉ · 15 points · 1 year ago

    There are no laws governing this, so they can do anything they want. Blame boomer politicians, not the companies.

  • @Ascend910@lemmy.ml · 11 points · 1 year ago

    This is a very interesting read. It’s very rare that people on the internet agree to follow one thing without being forced to.

  • KillingTimeItself · 0 points · 1 year ago

    Hmm, I thought websites just blocked crawler traffic directly? I know one site in particular has rules about it, and will even go so far as to ban you permanently if you continually ignore them.

      • KillingTimeItself · -1 point · 1 year ago

        I mean yeah, but at a certain point you just have to accept that it’s going to be crawled. The obviously negligent ones are easy to block.

  • @masonlee@lemmy.world · -3 points · 1 year ago

    Also, by the way, violating a basic social contract to not work towards triggering an intelligence explosion that will likely replace all biological life on Earth with computronium, but who’s counting? :)

    • IndescribablySad@threads.net · 7 points · 1 year ago

      If it makes you feel any better, my bet is still on nuclear holocaust, or complete ecological collapse resulting from global warming, being our undoing. Given a choice, I’d prefer nuclear holocaust. Feels less protracted. Worst option is weaponized microbes or antibiotic-resistant bacteria. That’ll take foreeeever.

      • @masonlee@lemmy.world · 2 points · 1 year ago

        100%. Autopoietic computronium would be a “best case” outcome, if Earth is lucky! More likely we don’t even get that before something fizzles. “The Vulnerable World Hypothesis” is a good paper to read.

    • @lunarul@lemmy.world · 2 points · 1 year ago

      That would be a danger if real AI existed. We are very far away from that and what is being called “AI” today (which is advanced ML) is not the path to actual AI. So don’t worry, we’re not heading for the singularity.

        • @lunarul@lemmy.world · 2 points · 1 year ago

          https://www.lifewire.com/strong-ai-vs-weak-ai-7508012

          Strong AI, also called artificial general intelligence (AGI), possesses the full range of human capabilities, including talking, reasoning, and emoting. So far, strong AI examples exist in sci-fi movies

          Weak AI is easily identified by its limitations, but strong AI remains theoretical since it should have few (if any) limitations.

          https://en.m.wikipedia.org/wiki/Artificial_general_intelligence

          As of 2023, complete forms of AGI remain speculative.

          Boucher, Philip (March 2019). How artificial intelligence works

          Today’s AI is powerful and useful, but remains far from speculated AGI or ASI.

          https://www.itu.int/en/journal/001/Documents/itu2018-9.pdf

          AGI represents a level of power that remains firmly in the realm of speculative fiction as on date

          • @masonlee@lemmy.world · 1 point · 1 year ago

            Ah, I understand you now. You don’t believe we’re close to AGI. I don’t know what to tell you. We’re moving at an incredible clip; AGI is the stated goal of the big AI players. Many experts think we are probably just one or two breakthroughs away. You’ve seen the surveys on timelines? Years to decades. Seems wise to think ahead to its implications rather than dismiss its possibility.

            • @lunarul@lemmy.world · 2 points · 1 year ago

              See the sources above and many more. We don’t need one or two breakthroughs, we need a complete paradigm shift. We don’t even know where to start with for AGI. There’s a bunch of research, but nothing really came out of it yet. Weak AI has made impressive bounds in the past few years, but the only connection between weak and strong AI is the name. Weak AI will not become strong AI as it continues to evolve. The two are completely separate avenues of research. Weak AI is still advanced algorithms. You can’t get AGI with just code. We’ll need a completely new type of hardware for it.

              • @masonlee@lemmy.world · 1 point · 1 year ago

                Before Deep Learning recently shifted the AI computing paradigm, I would have written exactly what you wrote. But as of late, the opinion that we need yet another type of hardware to surpass human intelligence seems increasingly rare. Multimodal generative AI is already pretty general. To count as AGI for you, you would like to see the addition of continuous learning and agentification? (Or are you looking for “consciousness”?)

                That said, I’m all for a new paradigm, and favor Russell’s “provably beneficial AI” approach!

                • @lunarul@lemmy.world · 1 point · 1 year ago

                  Deep learning did not shift any paradigm. It’s just more advanced programming. But gen AI is not intelligence. It’s just really well trained ML. ChatGPT can generate text that looks true and relevant. And that’s its goal. It doesn’t have to be true or relevant, it just has to look convincing. And it does. But there’s no form of intelligence at play there. It’s just advanced ML models taking an input and guessing the most likely output.

                  Here’s another interesting article about this debate: https://ourworldindata.org/ai-timelines

                  What we have today does not exhibit even the faintest signs of actual intelligence. Gen AI models don’t actually understand the output they are providing, that’s why they so often produce self-contradictory results. And the algorithms will continue to be fine-tuned to produce fewer such mistakes, but that won’t change the core of what gen AI really is. You can’t teach ChatGPT how to play chess or a new language or music. The same model can be trained to do one of those tasks instead of chatting, but that’s not how intelligence works.

              • conciselyverbose · 1 point · 1 year ago

              This is like saying putting logs on a fire is “one or two breakthroughs away” from nuclear fusion.

              LLMs do not have anything in common with intelligence. They do not resemble intelligence. There is no path from that nonsense to intelligence. It’s a dead end, and a bad one.

    • @glukoza@lemmy.dbzer0.com · 0 points · 1 year ago

      Ah, AI doesn’t pose a danger in that way. Its danger is in replacing jobs, people getting fired because of AI, etc.

        • @glukoza@lemmy.dbzer0.com · 0 points · 1 year ago

          Yeah, I’m not that much for UBI, and I don’t see anyone working towards a global VAT. I was saying that the worry about AI destroying humanity isn’t realistic; it’s just sci-fi.

          • @masonlee@lemmy.world · 1 point · 1 year ago

            Seven years ago I would have told you that GPT-4 was sci-fi, and I expect you would have said the same, as would have most every AI researcher. The deep learning revolution came as a shock to most. We don’t know when the next breakthrough towards agentification will come, but given the funding now, we should expect it soon. Anyways, if you’re ever interested to learn more about unsolved fundamental AI safety problems, the book “Human Compatible” by Stuart Russell is excellent. Also, “Uncontrollable” by Darren McKee just came out (I haven’t read it yet) and is said to be a great introduction to the bigger fundamental risks. A lot to think about; just saying I wouldn’t be quick to dismiss it. Cheers.