• brucethemoose@lemmy.world · 27 days ago

    This problem is hardly an issue on this platform.

    And this is the problem.

    I see objectively misleading, clickbait headlines and articles from bad (eg not recommended by Wikipedia) sources float to the top of Lemmy all the time.

    I call them out, but it seems mods are uninterested in enforcing more strict information hygiene.

    Step 1 is teaching journalism and social media hygiene as a dedicated class in school, or on social media… And, well, the US is kinda past that being possible :/.

    There might be hope for the rest of the world.

    • DandomRude@lemmy.world (OP) · 27 days ago

      Most of the misinformation I regularly find on top are statements made by the US president or his administration – and these are news reports in an appropriate context with appropriate commentary by Lemmy users. Occasionally, very rarely, I have also seen misinformation about the US president, but I don’t see that as much of a problem.

      Rather, I see it as a very serious problem that the US president himself and his administration are massively spreading misinformation. That is what my question refers to.

      • brucethemoose@lemmy.world · 27 days ago

        With no offense/singling out intended, this is what I’m talking about.

        You (and many others) are interested in misinformation from MAGA, but not in misreported news about MAGA. But it’s exactly these little nuggets that his media ecosystem pounces on, and they have gotten Trump to where he is.

        And it’s exactly the same on the “other side.” The MAGA audience is combing the greater news ecosystem for misinformation like a hawk while turning a blind eye to their own.

        The answer is for everyone to have better information hygiene, and that includes shooting down misleading story headlines one might otherwise like. It means being critical of your own information stream as you read.

        • DandomRude@lemmy.world (OP) · 27 days ago

          So you think it’s okay for the US president to spread misinformation? You really don’t see a problem with that, even though you yourself talk about “information hygiene”?

          • brucethemoose@lemmy.world · 27 days ago

            Of course not.

            But Trump’s going to do it and no one is going to stop him. And if we aren’t willing to look at, say, Lemmy and misleading upvoted posts, how can we possibly tell MAGA acolytes to do the same thing on a more extreme scale?

            • DandomRude@lemmy.world (OP) · 27 days ago

              Well, my question was about how to counter the constant misinformation spread by influential people like Trump (there are people like him in pretty much every country) – that’s why I mentioned other platforms, because Lemmy is completely irrelevant in this context due to its very limited reach.

              • brucethemoose@lemmy.world · 27 days ago

                Ah.

                Well IMO, we really can’t.

                I think the old adage of the internet applies: don’t feed the trolls. Trying to counter Trump just feeds his media machine with engagement, which is what got us here.

                In other words, there is no such thing as bad attention.

                Hence, I think we should focus our ire on the systems propping that up (like Big Tech’s engagement driven social media, profit above all news and such), not on Trump directly.

    • jimmy90@lemmy.world · 26 days ago

      yeah, lemmy could stop pushing extreme leftist misinformation from mysterious online “news” sources and rewriting history that would be a great start

      • brucethemoose@lemmy.world · 26 days ago

        That’s not what I meant. It’s true that too many left leaning tabloids get upvoted to the front page, but the direction of the slant isn’t the point, and there’s nothing “mysterious” about them. They’re clickbait/ragebait.

    • BrainInABox@lemmy.ml · 26 days ago

      bad (eg not recommended by Wikipedia)

      If you want to know why misinformation is so prominent, the fact that you think this is a good standard is a big part of it.

      Step 1 is teaching journalism and social media hygiene as a dedicated class in school

      And will those classes be teaching “Wikipedia is the indisputable rock of factuality, the holy Scripture from which truth flows”?

        • BrainInABox@lemmy.ml · 26 days ago

          It’s not of course, but it’s a good start. Certainly good enough to use as a quick but fallible reference:

          No, it really isn’t. The fact that Wikipedia has been arbitrarily vested with such supreme authority to be the default source of truth by so many people is a big part of why misinformation is so common. Back in my day, even high schoolers were taught not to do that.

          • brucethemoose@lemmy.world · 26 days ago

            Yes, I remember too. We were specifically told not to use Wikipedia.

            Then information hygiene went to shit. Now it’s a rare oasis in the current landscape.

            Look, I’m not saying to start referencing Wikipedia in scholarly journals or papers. But it’s more accessible than some JSTOR database and way above average, and more of the population using it would be a wonderful thing. The vast majority of the time, Wikipedia is not the source of misinformation/disinformation in this world.

            • BrainInABox@lemmy.ml · 26 days ago

              Then information hygiene went to shit. Now it’s a rare oasis in the current landscape.

              It went to shit because people started treating low quality sources like Wikipedia as “a rare oasis”.

              The vast majority of the time, Wikipedia is not the source of misinformation/disinformation in this world.

              Are you sure about that?

              • brucethemoose@lemmy.world · 26 days ago

                …You’re kidding, right?

                I’m looking around the information landscape around me, and Wikipedia is not even in the top 1000 of disinformation peddlers. They make mistakes, but they aren’t literally lying and propagandizing millions of people on purpose.

                • BrainInABox@lemmy.ml · 26 days ago

                  and Wikipedia is not even in the top 1000 of disinformation peddlers.

                  And you determined this how?

                  They make mistakes, but they aren’t literally lying and propagandizing millions of people on purpose.

                  And you determined this how?

  • CrayonDevourer@lemmy.world · 27 days ago

    This problem is hardly an issue on this platform.

    LOLOL – This platform is just as bad as Reddit for misinformation. It’s usually silly shit, but it’s almost always 90% truth laced with 10% lie. The fact that you believe it’s somehow immune to this is just testament to how hard it is for people to see this kind of thing clearly when it’s “on their side”. Problem is, any time it’s called out, people get massively downvoted for it, so people have stopped calling it out.

      • jeffw@lemmy.world · 27 days ago

        As a mod for a couple of the biggest communities… gestures to everything

      • JollyG@lemmy.world · 27 days ago

        Recently there was a news story about how people earning 150k were struggling financially. Even just reading the article was enough to know the idea was bullshit (which is probably why the headline used such mealy-mouthed language). But that did not stop a bunch of users from prognosticating about how terrible the economy is and how we are on the verge of collapse.

        The idea that households earning more than 150k are struggling is objectively wrong. They are not. But that idea is consistent with the political sentiments of users here ( billionaires vs everyone else in a zero sum economy ) so it gets traction.

        People pass around trash sources like The New Republic, which often just copies other news outlets but reframes stories to fit lefty sentiments about whatever current events are going on.

        In one community I encountered an image macro criticizing a judge for ruling against some plaintiffs suing Trump. It was completely divorced from any context, making it appear the judge was in the tank for Trump when, if you knew even a little about her or the ruling, you would immediately recognize that idea as bullshit.

        Those are just a few examples off the top of my head

      • CrayonDevourer@lemmy.world · 27 days ago

        Easily the one I see the most is Trump talking about “they rigged the election and now I’m here.” – I’m pointing out this one specifically, because any dunderhead dipshit knows from context what he’s talking about, but lemmy absolutely dives into the shallow end with it…

        He’s clearly making the claim that Dems rigged the 2020 election, and because of that, he’s president in 2024 when … I dunno - whatever 2 events are happening. (Fifa or some shit?) But EVERY fucking time on Lemmy it’s like “See he’s admitting he rigged the election!” and everyone just meep meeps into agreeance.

        That’s just one off the top of my head, and that’s with blocking most politics-based subs. If lemmy can’t even read or gather context from a sentence correctly – There’s no hope for the world.

  • 𞋴𝛂𝛋𝛆@lemmy.world · 27 days ago

    I look at any individual’s history when they post anything sketchy and contextualize. Anything politically motivated is likely a shill unless they have a long broadly engaged post history across many subjects with depth. I block a lot of people too.

    • shalafi@lemmy.world · 27 days ago

      Do me! I’d honestly be interested in a report. I’m obviously not a bot, but what can you glean from my posts?

      • garbagebagel@lemmy.world · 26 days ago

        I generally tag people when they say things I either really enjoy or very strongly disagree with. You’ve said a lot of things I very strongly disagree with including things I have found to be right-wing and gun-minded according to the tag I picked for you.

        It’s not a complete and probably not an accurate judge of your character, I just choose to do this so that I don’t have to engage in conversations with people where I feel it will be a waste of my energy.

    • Kalcifer@sh.itjust.works · 26 days ago

      I look at any individual’s history when they post anything sketchy and contextualize. […]

      I am concerned that this would distill down to argumentum ad hominem.

    • BrainInABox@lemmy.ml · 26 days ago

      Anything politically motivated is likely a shill

      Do you apply this to any political content? Or just politics you disagree with?

  • GreenKnight23@lemmy.world · 27 days ago

    step 1. misinformation is a problem on every platform. full stop.

    I think what you mean is maliciously manufactured information. still, I believe Lemmy is subject to it.

    I believe that both types can be effectively dispatched by effectively moderating the community, but not in the sense that you might be thinking.

    I believe that we are looking at community moderation from the wrong direction. today, the goal of the mod is to prune and remove undesired content and users. this creates high overhead and operational costs. it also increases chances for corruption and community instability. look no further than Reddit and lemmy, where we have a handful of mods in charge of multiple communities. who put them there? how do you remove them should they no longer have the community’s best interests in mind? what power do I have as a user to bring attention to corruption?

    I believe that if we flip the role of moderators to be instead guardians of what the community accepts instead of what they can see it greatly reduces the strain on mods and increases community involvement.

    we already use a mechanism of up/down vote. should content hit a threshold below community standards, it’s removed from view. should that user continue to receive below par results from inside the community, they are silenced. these par grades are rolling, so they would be able to interact within the community again after some time but continued abuse of the community could result in permanent silencing. should a user be unjustly silenced due to abuse, mod intervention is necessary. this would then flag the downvoters for abuse demerits and once a demerit threshold is hit, are silenced.

    notice I keep saying silenced instead of blocked? that’s because we shouldn’t block their access to content or the community, or even let them know nobody is seeing their content. in the case of malicious users/bots, the more time wasted on screaming into a void, the less time spent corrupting another community. in fact, I propose we allow these silenced users to interact with each other, where they can continue to toxify and abuse each other in a spiraling chain of abuse that eventually results in their permanent silencing. all the while, the community governs itself and the users hum along, unaware of what’s going on in the background.

    IMO it’s up to the community to decide what is and isn’t acceptable and mods are simply users within that community and are mechanisms to ensure voting abuse is kept in check.
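The vote-threshold mechanism proposed in this comment (rolling par grades, posts hidden below a threshold, repeat offenders silenced) could be prototyped roughly as below. This is a hypothetical illustration only: the class name, the thresholds, and the window size are all invented for the sketch, not part of Lemmy or any real mod tooling.

```python
# Hypothetical sketch of the proposed community-driven moderation:
# posts below a net-score threshold are hidden from view, and users who
# repeatedly fall below par within a rolling window are (shadow-)silenced.
from collections import deque

class CommunityModerator:
    def __init__(self, removal_threshold=-5, silence_after=3, window=10):
        self.removal_threshold = removal_threshold  # net score that hides a post
        self.silence_after = silence_after          # hidden posts before silencing
        self.window = window                        # rolling posts ("par grades are rolling")
        self.history = {}                           # user -> deque of recent net scores

    def record_post(self, user, upvotes, downvotes):
        """Record a post's votes; return True if the post stays visible."""
        score = upvotes - downvotes
        self.history.setdefault(user, deque(maxlen=self.window)).append(score)
        return score > self.removal_threshold

    def is_silenced(self, user):
        """A user is silenced once enough recent posts fell below par."""
        recent = self.history.get(user, ())
        return sum(1 for s in recent if s <= self.removal_threshold) >= self.silence_after
```

Because old scores roll out of the window, a silenced user naturally regains a voice after some time, matching the "cool down" idea; and since silencing only hides the posts from others, the user keeps screaming into the void.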

      • GreenKnight23@lemmy.world · 26 days ago

        genuinely curious: how would they game it?

        of course there’s a way to game it, but I think it’s a far better solution than what social media platforms are doing currently, and it gives more options than figuratively amputating parts of the community to save itself.

        • AA5B@lemmy.world · 26 days ago

          If I need 10 downvotes to make you disappear then I only need 10 Smurf accounts.

          At the same time, 10 might be a large portion of some communities while miniscule in others.

          I suppose you could limit votes to those in the specific community, but then you’d have to track their activity to see if they’re real or just griefing, and track that activity in relation to others to see if they’re independent or all grief together. And moderators would need tools not only to discover griefing but to manage it, and to configure sensitivity.

          • GreenKnight23@lemmy.world · 26 days ago

            you’re right. the threshold is entirely dependent on the size of the community. it would probably be derived from some part of community subscribers and user interactions for the week/month.

            should a comment be overwhelmingly positive that would offset the threshold further.

            in regards to griefing, if a comment or post is overwhelmingly upvoted and hits the downvote threshold that’s when mods step in to investigate and make a decision. if it’s found to not break rules or is beneficial to the community all downvoters are issued a demerit. after so many demerits those users are silenced in the community and follow through typical “cool down” processes or are permanently silenced for continued abuse.

            the same could be done for the flip-side where comments are upvote skewed.

            in this way, the community content is curated by the community and nurtured by the mods.

            appeals could be implemented for users who have been silenced and fell through the cracks, and further action could be taken by the admins against mods that routinely abuse or game the system.

            I think it would also be beneficial to remove the concept of usernames from content. they would still exist for administrative purposes and to identify problem users, but I think communities would benefit from the “double blind” test. there’s been plenty of times I have been downvoted just because of a previous interaction. also the same, I have upvoted because of a well known user or previous interaction with that user.

            it’s important to note this would change the psychological point of upvote and downvotes. currently they’re used in more of an “I agree with” or “I cannot accept that”. using the rules I’ve brought up would require users to understand they have just as much to risk for upvoting or downvoting content. so when a user casts their vote, they truly believe it’s in the interests of the community at large and they want that kind of content within the community. to downvote means they think the content doesn’t meet the criteria for the community. should users continue to arbitrarily upvote or downvote based on their personal preferences instead of community based objectivity, they might find themselves silenced from the community.

            it’s based on the principles of “what is good for society is good for me” and silences anyone in the community that doesn’t meet the standards of that community.

            for example, a community that is strictly for women wouldn’t need to block men. as soon as a man self-identified or shared ideas that don’t resonate with the community, he would be silenced pretty quickly. some women might be silenced too, but they would undoubtedly have shared ideas that were rejected by the community at large. this mimics the self-regulation that society has used for thousands of years IMO.

            I think we need to stop looking at social networks as platforms for individuals and look at them as platforms for the community as a whole. that’s really the only way we can block toxicity and misinformation from our communities. undoubtedly it will create echo chambers.
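The demerit system for abusive voters described in this sub-thread might look like the minimal sketch below; the limit of three demerits and every name here are assumptions made for illustration, not an existing API.

```python
# Hypothetical sketch of voter demerits: when mods rule that a mass-downvoted
# post was actually fine, each downvoter collects a demerit, and passing the
# demerit limit silences that voter within the community.
class VoteAuditor:
    def __init__(self, demerit_limit=3):
        self.demerit_limit = demerit_limit
        self.demerits = {}  # user -> demerit count

    def rule_post_acceptable(self, downvoters):
        """Mod decision: the post didn't break rules, so downvoting it was abuse."""
        for user in downvoters:
            self.demerits[user] = self.demerits.get(user, 0) + 1

    def is_silenced(self, user):
        return self.demerits.get(user, 0) >= self.demerit_limit
```

The flip side mentioned above (upvote-skewed brigading) would mirror this with the list of upvoters instead.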

  • frightful_hobgoblin@lemmy.ml · 27 days ago

    Simply leave social media, or believe nothing on it.

    Academic books by experts, peer-reviewed papers, etc. are better.

    Wikipedia and podcasts/interviews with real experts (not pundits, I mean experts) are good too.

    • DandomRude@lemmy.world (OP) · 27 days ago

      I’m only on Lemmy, but I don’t think my individual decision will make a difference—and unfortunately, I don’t think anyone should realistically expect it to.

      I think anyone who is already here has recognized the problem.

      • garbagebagel@lemmy.world · 26 days ago

        You can’t change the whole world but if you choose to point out misinformation among your real life group or in smaller communities, you can still make a difference.

    • BrainInABox@lemmy.ml · 26 days ago

      Wikipedia and podcast/interviews

      If you want to know how misinformation got so prominent, look at this as a good start.

  • bsit@sopuli.xyz · 27 days ago

    If we want to go the route of the Responsibility of the Individual: Resolve to not get your political etc. news from social media. Draw a line for yourself: cool to get gaming news from random influencers online? Probably. News about global events? At this point it might be better for most people’s mental health to ignore them and focus more locally. However, read How to Read a Book, make your best effort at finding a reputable news organization, and check those for news if you must have them. In the same vein, if you don’t read at least some article about an event being discussed on social media, DON’T COMMENT. Don’t engage with that post. If it really grabs at you, go find an article about it from a trusted source, and depending on how much it animates you, try to get a bigger picture of the event. Assume that the vast majority of ALL CONTENT online is currently incentivized to engage you: to capture your attention, which is actually the most valuable asset you have. Where you put your attention will define how you feel about your life. It’s highly advisable to put it where you feel love.

    Responsibility of the Collective: Moving up the hierarchies, we can start demanding that social media moderators (or whatever passes for those on any given site) prevent misinformation as much as possible. Try to only join communities that have mods who do this. Failing that, demand social media platforms prevent misinformation. Failing that, we can demand the government does more to prevent misinformation. All of those solutions have significant issues, one of them being that they are all heavily incentivized to capture the attention of as many people as possible. It doesn’t matter what the exact motivation is: it could be a genuinely good one. A news organization uses social media tactics to get the views so that their actually very factual and diligently compiled articles get the spread. Or, they could be looking to drive their political agenda, which they necessarily do anyway, because the desire to be factual and as neutral as possible is a stance as well. One that may run afoul of the interests of some government that doesn’t value freedom of the press, which is very dangerous, and you need to think hard for yourself about how you feel about the idea of the government limiting what kind of information you can access. For the purposes of making this shorter, you can regard massive social media platforms as virtual governments too. In fact, it would be a good idea in general.

    The thing with misinformation is that many people who talk about it subtly think that they are above it themselves. They’re thinking that they know they’re not subject to propaganda and manipulation, and that it’s the other poor fools who need to be protected from it. It’s the QAnon crowd and the antivaxxers. But you know better; you know how to dig deeper into massively complicated global topics and find out what the true and right opinion about them is. You can’t. Not even if we weren’t in the middle of multiple fucking information wars. You’d do well to focus on what you can know for sure, in your own experience. If you don’t like the idea of individual responsibility, though, because “most people aren’t going to do it”: your best bet at getting a collective response is a group of individuals coming together under the same ideal. It’ll happen sooner or later anyway, and there’s going to be plenty of suffering before then either way.

    • BrainInABox@lemmy.ml · 26 days ago

      we can start demanding that social media moderators (or whatever passes for those in any given site) prevent misinformation as much as possible.

      Yeah, but how are you expecting moderators to determine what is and isn’t misinformation?

      • bsit@sopuli.xyz · 26 days ago

        That’s one of the many issues with expecting a collective resolution. Question is: why do people feel they need to be able to discuss issues way beyond their understanding and personal experience online with others who also don’t know much about it? If actually done well, moderation is a full time job but nobody is interested in paying a bunch of online jannies to clean their space.

        That’s why I favor individual responsibility, and opting out of the possibility of being exposed to (or perpetuating) misinformation. Maybe in the future we can have forums for verified experts of a field, where regular people can have discussions with them and ask questions etc. But these would be moderated places where you do need to bring proof and sound arguments, not emotionally charged headlines.

        The stories and information posted on social media are artistic works of fiction and falsehood. Only a fool would take anything posted as fact.

  • Kalcifer@sh.itjust.works · 26 days ago

    [misinformation] is hardly an issue on this platform […]

    In my opinion, that statement of yours is, ironically, responsible for why there may be an issue with misinformation. You state it with certainty, yet you provide no source to back up your claim. It is my belief that this sort of conjecture is at the source of misinformation issues.

  • 9tr6gyp3@lemmy.world · 27 days ago

    It honestly just depends on how many steps you want. You’re going to have to figure out the logistics of taking them, first of all. Do you want to take a premade set of steps or would you rather mold/cast them onsite?

    Obviously concrete is heavy af, so if you are going to precast them, you might consider using fewer steps. The more steps you add, the heavier it’s going to be. Of course, this isn’t an issue if you have a heavy-duty vehicle with a lift.

    Also, do you want rails on them? That will take extra time to set them in place.

    Some examples I would recommend would be something like these.

    Or maybe this

  • Kalcifer@sh.itjust.works · 26 days ago

    What concrete steps can be taken to combat misinformation on social media? […]

    Regarding my own content: I do my best to cite any claim that I make, no matter how trivial. If I make a statement for which I lack confidence in its veracity, I do my best to convey that uncertainty. I do my best to convey explicitly whether a statement is a joke, or sarcasm.

    Fundamentally, my approach to this issue is based on this quote:

    Rationality is not a character trait, it’s a process. If you fool yourself into believing that you’re rational by default, you open yourself up to the most irrational thinking. [1]

    Regarding the content of others: If I come across something that I believe to be false, I try to politely respond to it with a sufficiently and honestly cited statement explaining why I think it is false. If I come across something of unknown veracity/clarity, I try to politely challenge the individual responsible to clarify their intent/meaning.

    For clarity, I have no evidence to support that what I’m doing is an effective means to this end, but I want to believe that it’s helping in at least some small way.

    References
    1. Type: Comment. Author: “@The8BitPianist”. Publisher: [Type: Post (Video). Title: “On These Questions, Smarter People Do Worse”. Author: “Veritasium” (“@veritasium”). Publisher: YouTube. Published: 2024-11-04T16:48:03Z. URI: https://www.youtube.com/watch?v=zB_OApdxcno.]. Published: 2024-11-04T09:06:26Z. Accessed: 2025-03-29T07:48Z. URI: https://www.youtube.com/watch?v=zB_OApdxcno&lc=Ugy6vV7Z3EeFHkdfbHl4AaABAg.


  • haloduder@thelemmy.club (banned from community) · 25 days ago

    Teach people how to cite appropriately.

    We learned how to do it in middle school, but I can tell most of my adult peers either didn’t pay attention or forgot.

  • FreshParsnip@lemmy.ca · 23 days ago

    I don’t know, but the outright lies on Facebook are making me mad. People actually believe JK Rowling is suing HBO over the casting of Snape when in reality, she is helping produce the show and is fine with the casting.

  • AbouBenAdhem@lemmy.world · 27 days ago

    IMO, the typical approach of using fact-checking services to rate the accuracy of sources is inevitably flawed: if a source (or a fact checker) builds a reputation for reliability, it will eventually be suppressed or subverted into exploiting its reputation for other purposes.

    A better option might be to treat all sources as potentially informative, but not at face value: rather, build a predictive model of each source, and treat as significant only those stories that deviate from prediction (i.e., stories that seem atypical for that source). Those are the stories most likely to convey information the source didn’t generate itself.
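A toy version of that "deviation from prediction" filter could score each story against the source's historical baseline and keep only the outliers. The numeric feature here (say, a sentiment or topic score) and the z-score cutoff are stand-ins invented for the sketch; a real system would use an actual predictive model per source.

```python
# Toy deviation filter: treat a source's past story scores as its "model",
# and surface only new stories that land far from that baseline.
import statistics

def atypical_stories(history, new_stories, z_cutoff=2.0):
    """history: past feature scores; new_stories: (title, score) pairs."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # guard against a zero spread
    return [title for title, score in new_stories
            if abs(score - mean) / stdev > z_cutoff]
```

A story typical of the source is filtered out regardless of its content; only the stories the source's own history does not predict get through, matching the idea that those are the ones most likely to carry outside information.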

    • DandomRude@lemmy.world (OP) · 27 days ago

      That’s certainly a good point, but I’m less concerned with how to verify information than with how to counteract the constant flow of misinformation — especially on other platforms where misinformation is deliberately pushed, which is causing major problems in my home country alone.

      • BrainInABox@lemmy.ml · 26 days ago

        How are you going to counter misinformation if you can’t determine what is and isn’t misinformation?

            • DandomRude@lemmy.world (OP) · 26 days ago

              What I meant was that my question wasn’t about how to distinguish between reputable and unreliable sources – I think most Lemmy users are capable of doing that.

              I was more interested in how we can effectively and meaningfully contribute to countering the flood of misinformation on social media (such as Twitter or meta apps).

              The background to my question is the fact that this misinformation influences users’ opinions. I think the US is the best example of where that can lead. Unfortunately, there are similar trends in my home country. Since I don’t want to be ruled by fascists, I thought I’d ask the community here what can be done.

              But apparently I didn’t phrase the question very well.

              • BrainInABox@lemmy.ml · 26 days ago

                What I meant was that my question wasn’t about how to distinguish between reputable and unreliable sources – I think most Lemmy users are capable of doing that.

                Well that makes one of us. My experience is that most Lemmy users think Wikipedia was written by God himself.