This interview is two months old, but I haven’t seen it discussed so far, and given the news about Reddit’s new crypto beanie babies, here it goes. It is a critique of the tech hype cycle: LLMs, VR, the metaverse failure, NFTs, and cryptocurrencies, with a refreshing historical awareness of past attempts that failed, like Second Life and VR games. Adam Conover’s interviewee is Dan Olson: https://www.vice.com/en/article/m7v5qq/meet-the-guy-who-went-viral-on-youtube-for-explaining-how-nfts-crypto-are-a-poverty-trap

  • Peanut@sopuli.xyz · 1 year ago

    Does nobody remember how utterly uninformed Conover’s previous takes on AI were? And I still know whole communities of people who basically live in VR. They are doing just fine.

    Look here if you just want to hate on tech and tech enthusiasts. Don’t look here for a reasoned and thoughtful conversation.

    Also can we stop trying to paint AI enthusiasts in a bad light by acting like everyone into AI is an NFT grifter?

    It’s intellectually dishonest.

    The way it’s usually presented would make you think we have Yann LeCun and Melanie Mitchell in full fratboy drip promoting their NFTs.

    • floridaman@lemmy.blahaj.zone · 1 year ago

      Tech enthusiast here! If you say AI in any context of current technology, you’re either severely uninformed or grifting. There is no AI; it’s all machine learning. Adam may have been in the uninformed group, but he’s still been more right about “AI” than anyone else recently. Machine learning ≠ Artificial Intelligence, and if you think otherwise, you don’t know what you’re talking about. Read some of the white papers, and don’t use the term AI, please.

      • BetaDoggo_@lemmy.world · 1 year ago

        Most companies making the more consumer-facing applications are barely publishing papers these days, and when they do, they’re often lacking important information/materials which would be required to recreate them (datasets, models, training parameters, etc.). Somehow Facebook has become the gold standard of open development.

      • Peanut@sopuli.xyz · 1 year ago

        note that my snarky tone in this response is due to befuddlement and not an intent to insult or argue with you.

        what a weirdly strict semantic requirement to emphasise as law. it’s a good thing you are emphasising it so strongly, or we might see people use it while interviewing the guy who wrote the book on generative deep learning

        or see it used in silly places like MIT or stanford.

        what kind of grifter institutions would be so unprofessional?

        oh no, melanie mitchell is using a header saying that she “writes about AI.” are you really suggesting melanie mitchell is uninformed?

        or… yann lecun? “Researcher in AI.”

        do you know who yann lecun is? do you know what back-propagation is?
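
        for anyone following along: back-propagation is just the chain rule, applied to work out how much each weight contributed to the error so the weights can be nudged downhill. here is a minimal single-neuron sketch (a toy example i made up for illustration, not any real library’s API):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def backprop_step(w, b, x, t, lr=0.5):
    """One gradient-descent step on loss (y - t)^2 for y = sigmoid(w*x + b)."""
    # forward pass
    z = w * x + b
    y = sigmoid(z)
    loss = (y - t) ** 2
    # backward pass: chain rule, factor by factor
    dL_dy = 2 * (y - t)
    dy_dz = y * (1 - y)      # derivative of sigmoid
    dL_dz = dL_dy * dy_dz
    dL_dw = dL_dz * x        # z = w*x + b, so dz/dw = x
    dL_db = dL_dz            # dz/db = 1
    # update the weights against the gradient
    return w - lr * dL_dw, b - lr * dL_db, loss

w, b = 0.0, 0.0
for _ in range(200):
    w, b, loss = backprop_step(w, b, x=1.0, t=1.0)
print(loss)  # the loss has shrunk toward 0
```

        the same mechanics, scaled up to billions of weights, is what trains every modern deep network. that’s the thing lecun is known for helping popularise.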

        these are some of the most respectable and well known names in the field. these were the first few darts i threw, and i’m unsurprised that i’m hitting bullseyes. i’m sure i could find many more examples if i kept going.

        maybe you’re assuming any use of AI means AGI, but most people i know of in the field just say “AGI” when talking about AGI.

        if you don’t like how non-specific it is in definition and use, that’s fine, and there’s an argument to be made there, but you’re stating your opinion and preference as consensus in the field that the term should just never be used.

        i think your enthusiasm needs to run a little deeper before being so critical. the intense yet uninformed nature of your opinion would also explain how you find that adam has “still been more right about ‘AI’ than anyone else recently.”

        what white papers am i missing that emphasise this rule so vehemently?

        • floridaman@lemmy.blahaj.zone · 1 year ago

          tl;dr: you’re right, but I’m more concerned about the media and lay people misusing the term, and it just annoys me personally too.

          I am sorry, I should’ve been clearer, and I do genuinely appreciate your comment; it is very useful information. I am referring to lay people using the term “AI”. These institutions and individuals, who I agree are the most trusted on the topic, use the term because it has become the broadly recognized one. I have no problem with them using it, because they are aware of the context they use it in.

          My problem is with people who are not versed in the topic using it, and that is where I agree with Conover. When the media, and more broadly the general public, use the term AI, they usually treat it as a big spooky thing that could come for everyone’s jobs, and especially on the Internet the term is used to perpetuate a grift. I don’t wish to argue either; I just wanted to append this to clear up what I meant.

          And don’t get me wrong, I’m no expert on the topic, as much as I made it sound like that. I’m just tired of the media and the actual grifters misusing the term without actually understanding its connotations. “Artificial Intelligence” sounds like something that can replace a human mind, and to some extent a lot of generative LLMs can do that, but they aren’t intelligent; they are just large algorithmic guessing machines, and using “AI” feels misleading to me in that sense. I know this comes down to personal opinion in the end, at least on my part, but you are right, and I’m just tired of hearing people talk about AI like it’s some existential threat as it is now.

          Wow that ran on, sorry if you read the whole thing lol. Also you don’t gotta reply, I just wanted to clarify what I really meant to say.

          Also ETA: sorry I sounded so snarky, I’m just so tired of this whole topic

          • Peanut@sopuli.xyz · 1 year ago

            replying despite your warning. i also won’t be offended if you don’t read. and the frustration is fair.

            TLDR: intelligence is weird, complex, and abstract. it is very difficult for us to comprehend the complex nature of intelligence alien to our own. the human mind is a very specific combination of different intelligent functions.

            funny you mention the technology not being an existential threat, as the two researchers i’d mentioned were recently paired at the munk debate arguing against the “existential threat” narrative.

            getting into the deep end of the topic, i think most with a decent understanding of it would agree it is a form of “intelligence” alien to what most people would understand.

            technically a calculator can be seen as a very basic computational intelligence, although very limited in capability or purpose outside of a greater system. LLMs mirror the stochastic word generation element of our intelligence, and a lot of weird neat amazing things that come with the particular type of intelligent system that we’ve created, but it definitely lacks much of what would be needed to mirror our own brand of intelligence. it’s so alien in function, yet so capable at representing information that we are used to, it is almost impossible not to anthropomorphise.
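
            the “stochastic word generation” part can be sketched in a few lines: the model maps the current context to a probability distribution over next tokens and samples from it, over and over. here is a toy bigram table standing in for a real LLM (the words and probabilities are made up for illustration):

```python
import random

# toy bigram "language model": each token maps to a probability
# distribution over possible next tokens
BIGRAMS = {
    "<s>": {"the": 0.6, "a": 0.4},
    "the": {"cat": 0.5, "dog": 0.5},
    "a":   {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"sat": 0.3, "ran": 0.7},
    "sat": {"</s>": 1.0},
    "ran": {"</s>": 1.0},
}

def generate(rng=random):
    """Stochastic word generation: repeatedly sample the next token."""
    token, out = "<s>", []
    while token != "</s>":
        dist = BIGRAMS[token]
        # sample the next token in proportion to the model's probabilities
        token = rng.choices(list(dist), weights=list(dist.values()))[0]
        if token != "</s>":
            out.append(token)
    return " ".join(out)

print(generate())  # e.g. "the cat sat"
```

            a real LLM replaces the hand-written table with a learned neural network conditioning on the whole context over a huge vocabulary, but the sampling loop is the same idea.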

            i’m currently excited by the work being done in understanding our own intelligence as well.

            but how would you represent a function so complex and abstracted as this in a system like GPT? if qualia is an emergent experience developed through evolution, reliant on the particular structure and makeup of our brains, you would need more than the aforementioned system at any level of compute. while i don’t think the principal function would be impossible to emulate, i don’t think it’d come about by upscaling GPT models. we will develop other facsimiles more aligned with the specific intentions we have for the tool the intelligence is designed and directed to be.

            i think we can sculpt some useful forms of intelligence out of upscaled and altered generative models, although yann lecun might disagree. either way, there’s still a fair way to go, and a lot of really neat developments to expect in the near future. (we just have to make sure the gains aren’t hoarded like every other technological gain of the past half century.)

    • Gsus4@feddit.nlOP · 1 year ago

      Yes, he probably does not have the right technical background for this, but he has a broader view of the impact of technology on society. (Lex Fridman, by contrast, does have the technical background, but he never seems to challenge the process by which innovation herds people into bandwagons.) Still, the discussion is meta enough to cover common trends in tech and the hype cycle that overlays legitimate research.

      It’s a flawed interview (they spend too long venting about Zuckerberg), but they make some good points. My other complaint is that I would have preferred to read the main points on a page in 10 minutes instead of over an hour, but if you’re doing the dishes it’s enjoyable :)

    • Gsus4@feddit.nlOP · 1 year ago

      You mean Adam Conover’s interviews? Me neither, this is completely new to me too; he’s like a literate Joe Rogan and a less formal Lex Fridman :)

        • Gsus4@feddit.nlOP · 1 year ago

          I didn’t know Dan, but when I heard he actually studied theology, I finally understood his meticulous understanding and debunking of the crypto delusion, NFTs, and tech hype cycles in general. Now I’m watching “Line Goes Up” :)

          • driving_crooner · 1 year ago

            All his videos are great. I would recommend watching them in order, from “Cooking food on the internet for fun and profit” to the newest ones (you can skip the one about Christmas movies; that one is not as good).