• gedaliyah@lemmy.worldOP · 2 months ago

      “We always obey the robots.txt”

      • A bunch of corporations that have no accountability, every incentive to just ignore it, and that have all been caught training AI on off-limits data.
    • tal@lemmy.today · edited · 2 months ago

      I wonder what kind of contract they went with.

      https://www.reuters.com/technology/reddit-ai-content-licensing-deal-with-google-sources-say-2024-02-22/

      SAN FRANCISCO, Feb 21 (Reuters) - Social media platform Reddit has struck a deal with Google (GOOGL.O) to make its content available for training the search engine giant’s artificial intelligence models, three people familiar with the matter said.

      The contract with Alphabet-owned Google is worth about $60 million per year, according to one of the sources.

      For perspective:

      https://www.cbsnews.com/news/google-reddit-60-million-deal-ai-training/

      In documents filed with the Securities and Exchange Commission, Reddit said it reported net income of $18.5 million — its first profit in two years — in the October-December quarter on revenue of $249.8 million.

      So if you annualize that, Reddit’s seeing revenue of about $1 billion/year, and net income of about $74 million/year.

      Given that Reddit granting exclusive indexing to Google happened at about the same time, I would assume that the AI-training deal included the indexing-exclusivity agreement, but maybe it’s separate.

      My gut feeling is that the exclusivity is probably worth more than $60 million/year, and that Google is getting a pretty good deal. Google did not buy Reddit, and Google has done some pretty big acquisitions, like YouTube; an acquisition would have been another way to get exclusive access. So I’d think this deal is probably better for Google than buying Reddit outright.

      Reddit’s market capitalization is about $10 billion, so Google is paying roughly 0.6% of Reddit’s value per year for exclusive training rights to its content and for being the only search engine indexing it. Aside from Reddit users themselves running into content in subreddits, I’d guess those two are the main ways one might leverage the content there.

      Plus, my impression is that a number of companies share an idea – which may or may not be valid – that this is the beginning of the move away from search engines. Down the line, the typical person doesn’t use a search engine to find a webpage that serves as a primary source; instead, they just query an AI, which compiles all the data it can see and spits out an answer. That saves the searcher some time and reduces complexity, and it might even solve some problems, if AIs can ultimately do a better job of filtering out erroneous information than humans do. We definitely aren’t there yet in 2024, but if that’s where things are going, it might make a lot of strategic sense for Google: if Google can lock up major sources of training data and keep Microsoft out, it puts Microsoft in a difficult spot if Microsoft is gunning for the same thing.

        • tal@lemmy.today · edited · 2 months ago

          If we do end up at a point without search engines, where AI does the search and summarizes an answer, what do you think their level of ability to tie back to source material will be?

          I haven’t used the text-based search queries myself; I’ve used LLM software, but not for this, so I don’t know what the current situation is like. My understanding is that the current approach doesn’t really permit it, for two reasons:

          • There isn’t a direct link between one source and what’s being generated; the model isn’t really structured so as to retain this.

          • Many different sources probably contribute to the answer.

          All of the training data contributes a little to the probability of the next word the model emits. The software isn’t rapidly looking through all the pages out there and finding a single reputable source that it could then cite, the way a human might. That is, it isn’t searching an enormous database when the query comes in; it’s repeatedly predicting that the next word in the response is a given word, and that probability is derived from many different sources. Maybe tens of thousands of people have made posts on a given subject; the response isn’t just a quote from one of them, and the generated text may appear in none of them.
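          To make that concrete, here’s a toy Python sketch of next-word generation. The probability table, words, and seed are all invented for illustration; a real model derives its probabilities from billions of documents, with no per-source records:

```python
import random

# Toy illustration (not a real LLM): the "model" is just a table of
# aggregate next-word probabilities. Nothing here records which source
# documents those probabilities were distilled from.
next_word_probs = {
    ("the", "cat"): {"sat": 0.5, "ran": 0.3, "slept": 0.2},
    ("cat", "sat"): {"on": 0.9, "quietly": 0.1},
}

def generate(context, steps, rng=random.Random(0)):
    words = list(context)
    for _ in range(steps):
        dist = next_word_probs.get(tuple(words[-2:]))
        if dist is None:
            break  # no prediction available for this context
        choices, weights = zip(*dist.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate(["the", "cat"], 2))
```

          Note that nothing in the loop knows which documents the probabilities came from; that information was discarded when the table was built, which is exactly why citing a source after the fact is hard.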

          To maybe put that in terms of how a human thinks – place yourself in the generative AI’s shoes – suppose I say to you “draw a house”. You draw a house with two windows, a flowerbed out front, whatever. I ask, “which house is that?” You can’t tell me, because you weren’t trying to remember and reproduce one house; you presented me with a synthetic aggregate of many different houses, and probably every house you’ve seen contributed a bit to it. Maybe you could think of a house you’ve seen that looks a fair bit like it, but that’s not quite what I asked. The honest answer is “it doesn’t reflect any single house in the real world”, which isn’t really what you want to hear.

          It might be possible to run a traditional search over a generated response to find an example of that text, if it amounts to a quote (which it may not!).

          And if Google produces some kind of “reliability score” for a given piece of material and weights the material in the training set by it (which I’d guess they will, if they don’t already), they could use that score to rank candidate sources when doing that backwards search.

          But there’s no guarantee that that will succeed, because they’re ultimately synthesizing the response, not just quoting it, and because it can come from many sources. There may potentially be no one source that says what Google is handing back.
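          A minimal sketch of what such a backwards search might look like. Every name, score, and URL below is made up for illustration – this is not anyone’s real system – but it shows the shape of the idea: score each candidate source by word overlap with the generated text, weighted by reliability, and accept that the best score may still be low, meaning no single source really says it:

```python
# Hypothetical attribution-by-search: given generated text and a corpus of
# (url, text, reliability) candidates, find the closest-matching source.
def attribute(generated, corpus):
    gen_tokens = set(generated.lower().split())
    best = None
    for url, text, reliability in corpus:
        # Fraction of generated words found in this source, scaled by
        # the (hypothetical) reliability score of the source.
        overlap = len(gen_tokens & set(text.lower().split())) / max(len(gen_tokens), 1)
        score = overlap * reliability
        if best is None or score > best[1]:
            best = (url, score)
    # A low best score means the response was synthesized, not quoted.
    return best

corpus = [
    ("https://example.org/a", "the moon orbits the earth", 0.9),
    ("https://example.org/b", "cheese is made from milk", 0.6),
]
print(attribute("the moon orbits the earth every month", corpus))
```

          Even here, the winning source only partially matches the generated sentence, which is the failure mode described above: the system can hand back a “closest” source without that source actually saying what was generated.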

          It’s possible that there will be other methods than the present ones used for generating responses in the future, and those could have very different characteristics. Like, I would not be surprised, if this takes off, if the resulting system ten years down the road is considerably more complex than what is presently being done, even if to a user, the changes under the hood aren’t really directly visible.

          There’s been some discussion about developing systems that do permit this; I believe the term to search for is “attributability”, but I haven’t been reading the research on it.

    • tal@lemmy.today · 2 months ago

      Blocking other search engines will hurt Reddit, all else held equal. But not by that much. Google is seriously dominant in the search engine market.

      kagis

      Yeah.

      https://gs.statcounter.com/search-engine-market-share

      According to this, Google has 91.06% of the search engine market. So for Reddit, they’re talking about cutting themselves off from a little under 9% of people searching out there. Which…I mean, it isn’t insignificant, but it isn’t likely gonna hurt them all that badly.

      • eronth@lemmy.world · 2 months ago

        It’s also worth noting that the ~9% they cut off was probably the group already more inclined to be using alternatives to Reddit anyway.

          • whatwhatwhatwhat@lemmy.world · 26 days ago

            Seconding this. I work in IT, and the number of tech-illiterate people using DuckDuckGo as their default search engine is astounding. It’s got to be about 10% of our users (none of whom are in tech roles).

  • Azzu@lemm.ee · 2 months ago

    I wish Lemmy search were better. The built-in search function actually works decently well, but it’s not on the level of a real search engine: it doesn’t seem to look for related or similar terms, and the relevancy ranking doesn’t seem right.

    • gedaliyah@lemmy.worldOP · 2 months ago

      I do occasionally find Lemmy in web search results. The platform is not that big (or old), but as long as it sticks around then eventually searchability will improve.

  • x00z@lemmy.world · 2 months ago

    Hi, I’m new here. Because of the bullshit with Reddit. Greetings fellow Lemmy people.

  • z3rOR0ne@lemmy.ml · 2 months ago

    I’ve posted this elsewhere, but it bears repeating:

    Just use DDG bangs if you use DuckDuckGo, and you can search Reddit directly:

    !reddit search term
    

    or:

    !r search term
    

    It still picks up the latest Reddit posts; it just searches Reddit directly instead of going through Bing’s results. It’s that simple.

    You can even use a redirect extension like LibRedirect in conjunction with this DuckDuckGo feature to redirect your search to a privacy-respecting frontend like Redlib.
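    For the curious, a bang query is nothing special on the wire; it’s an ordinary DuckDuckGo query string that DDG interprets and redirects. A small Python sketch of the URL your browser ends up requesting (the search terms are just an example):

```python
from urllib.parse import quote_plus

# A DDG bang is just part of the query string; DuckDuckGo recognizes the
# leading "!r" server-side and redirects you to Reddit's own search.
def ddg_bang_url(bang, terms):
    return "https://duckduckgo.com/?q=" + quote_plus(f"{bang} {terms}")

print(ddg_bang_url("!r", "self hosting tips"))
# https://duckduckgo.com/?q=%21r+self+hosting+tips
```

    The `!r` is percent-encoded like any other query text; the redirection happens on DuckDuckGo’s side, not in your browser.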

      • lennivelkant@discuss.tchncs.de · 2 months ago

        I used to sneer at the kids in my class that used it. Must have been fairly shortly after it launched, something like fourteen to fifteen years ago. I’m still grappling with a certain inertia when it comes to switching away from something I have relied on for so long, but I’m coming around to the idea of giving DDG a try at least (irrational as it is, I’ve been reluctant to even try - I suspect out of fear of liking it and having to change).

        Past Me would be exasperated that Present Me is even toying with the idea. But then, Past Me had a lot of stupid takes anyway.

        • unconfirmedsourcesDOTgov@lemmy.sdf.org · 2 months ago

          I went through the same process you’re describing. In the end I gave it a shot and, anecdotally, I feel like I find the things I’m looking for faster than I did with Google, with no shoddy AI summaries.

          • noli@lemmy.zip · 2 months ago

            I like to say that DDG gives you what you searched for while google gives you what it thinks you wanted.

  • Mnemnosyne@sh.itjust.works · 2 months ago

    I’m kind of curious to understand how they’re blocking other search engines. I was under the impression that search engines just viewed the same pages we do to search through, and the only way to ‘hide’ things from them was to not have them publicly available. Is this something that other search engines could choose to circumvent if they decided to?

    • Madis@lemm.ee · 2 months ago

      Search engine crawlers identify themselves (via their user agents), so they can be kept out both by the honor system (robots.txt) and by active blocking (returning HTTP 403 or similar) when they try anyway.
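      The honor-based half can be seen with Python’s standard library, which ships a robots.txt parser. The rules below are invented for illustration (Reddit’s real robots.txt is different); they admit Googlebot and turn every other crawler away:

```python
from urllib.robotparser import RobotFileParser

# A well-behaved crawler fetches robots.txt and asks whether its own
# user agent may crawl a given path. These rules are a made-up example.
rules = """\
User-agent: Googlebot
Allow: /

User-agent: *
Disallow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

print(parser.can_fetch("Googlebot", "/r/programming"))  # True
print(parser.can_fetch("Bingbot", "/r/programming"))    # False
```

      The catch, as noted above, is that nothing enforces this: a crawler that ignores robots.txt has to be caught by active blocking instead.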

      • Mnemnosyne@sh.itjust.works · 2 months ago

        Thank you, I understand better now. So in theory, if one of the other search engines chose to not have their crawler identify itself, it would be more difficult for them to be blocked.

        • tb_@lemmy.world · 2 months ago

          This is where you get into the whole webscraping debate you also have with LLM “datasets”.

          If you, as a website host, detect a ton of requests coming from a single IP, you can block that address. There are ways around that, such as making the requests from many different IP addresses, but there are ways to detect that too!
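          As a toy illustration of per-IP blocking, here’s a sliding-window rate limiter in Python. The window size and threshold are arbitrary numbers picked for the example; real anti-scraping stacks combine this kind of check with user-agent inspection, behavioral signals, and more:

```python
import time
from collections import defaultdict, deque

# Sliding-window rate limit: at most MAX_REQUESTS per IP per WINDOW_SECONDS.
WINDOW_SECONDS = 60
MAX_REQUESTS = 100

_hits = defaultdict(deque)  # ip -> timestamps of recent requests

def allow_request(ip, now=None):
    now = time.monotonic() if now is None else now
    window = _hits[ip]
    # Drop timestamps that have aged out of the window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_REQUESTS:
        return False  # over the limit: respond 403/429 instead of serving
    window.append(now)
    return True
```

          A scraper rotating through many IPs defeats exactly this check, which is why detection escalates to other signals; that’s the arms race being described.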

          I’m not sure if Reddit would try to sue Microsoft or DDG if they started serving results anyway through such methods. I don’t believe it is explicitly disallowed.
          But if you were hoping to deal in any way with Reddit in the future I doubt a move like this would get you in their good graces.

          All that is to say; I won’t visit Reddit at all anymore now that their results won’t even show up when I search for something. This is a terrible move and will likely fracture the internet even more as other websites may look to replicate this additional source of revenue.

  • Babalugats@lemmy.world · 2 months ago

    They’re also blocking posts by users who aren’t banned or even got a warning. It appears to the user as though it’s been posted, but it hasn’t.

  • KroninJ@lemmy.world · 2 months ago

    It’s still possible to search with “site:reddit.com …”

    Has it not been implemented yet, or are they only blocking searches that don’t use the site: flag? That would seem odd.

    • tb_@lemmy.world · 2 months ago

      You shouldn’t be getting any new results if you do that; older posts will, or at least may, remain indexed for a while.