Summary: Meta, led by CEO Mark Zuckerberg, is investing billions in Nvidia’s H100 GPUs to build massive compute infrastructure for AI research and projects. By the end of 2024, Meta aims to have 350,000 of these GPUs, with total expenditure potentially reaching $9 billion. The move is part of Meta’s push toward artificial general intelligence (AGI), putting it in competition with firms like OpenAI and Google’s DeepMind. The company’s AI and computing investments are a key part of its 2024 budget, with AI flagged as its largest investment area.
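As a rough sanity check on how the ~$9 billion figure relates to the 350,000-GPU target, here is a back-of-envelope sketch. The ~$25,000 per-unit price is an assumption based on commonly cited low-end H100 estimates, not a figure from the article.

```python
# Back-of-envelope check on the reported figures.
gpu_count = 350_000          # H100s Meta reportedly aims to have by end of 2024
est_unit_price = 25_000      # assumed low-end price per H100 in USD (not from the article)

total_usd = gpu_count * est_unit_price
print(f"~${total_usd / 1e9:.2f}B")  # ~$8.75B, in line with the ~$9B cited
```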

  • wewbull@feddit.uk · 5 months ago

    Pretty sure they’ll be given insight into the roadmap for that price, and be able to place speculative orders on upcoming generations.

      • wewbull@feddit.uk · 5 months ago

        Of course they do, but my point was that I doubt Meta is locked into this generation.

          • wewbull@feddit.uk · 5 months ago

            “Spend billions” does not mean “hand over cash and take home GPUs”. It’ll mean a contract worth that amount, with delivery terms defined over time. Even over the course of a year there’s likely to be newer product than Hopper (the H100’s architecture).

              • Rapidcreek@lemmy.world · 5 months ago

               You pay for product when you receive it; “spending” means paying for it. You may have a contract for future product, but you don’t pay for that product in advance, as SOX rules kick in. A chip development cycle is commonly at least 10 months.