Wilshire@lemmy.world to Technology@lemmy.world · English · 5 months ago
The first GPT-4-class AI model anyone can download has arrived: Llama 405B (arstechnica.com)
cross-posted to: tech@programming.dev
raldone01@lemmy.world · English · edited 5 months ago
My specs, because you asked:
CPU: Intel(R) Xeon(R) E5-2699 v3 (72) @ 3.60 GHz
GPU 1: NVIDIA Tesla P40 [Discrete]
GPU 2: NVIDIA Tesla P40 [Discrete]
GPU 3: Matrox Electronics Systems Ltd. MGA G200EH
Memory: 66.75 GiB / 251.75 GiB (27%)
Swap: 75.50 MiB / 40.00 GiB (0%)
sunzu@kbin.run · 5 months ago
OK, this is a server. 48 GB cards and 67 GB of RAM? For the model alone?
raldone01@lemmy.world · English · 5 months ago
Each card has 24 GB, so 48 GB of VRAM total. I use ollama; it fills whatever VRAM is available on both cards and runs the rest on the CPU cores.
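As a rough sketch of the partial-offload arithmetic described above (this is a toy estimate assuming uniform layer sizes, not ollama's actual scheduler; the model size and layer count used in the example are assumed figures for a quantized 405B-class model):

```python
# Toy estimate of GPU/CPU layer split for partial offload.
# Assumes all transformer layers are the same size, which real
# schedulers (like ollama's) do not strictly assume.

def split_layers(model_gib: float, n_layers: int, vram_gib: float) -> tuple[int, int]:
    """Return (gpu_layers, cpu_layers) for a model of model_gib GiB
    with n_layers layers, given vram_gib GiB of total VRAM."""
    per_layer = model_gib / n_layers
    gpu_layers = min(n_layers, int(vram_gib // per_layer))
    return gpu_layers, n_layers - gpu_layers

# Assumed figures: ~230 GiB quantized weights, 126 layers, 2x24 GiB P40s.
gpu, cpu = split_layers(230.0, 126, 48.0)
print(f"{gpu} layers on GPU, {cpu} layers on CPU")
```

With those assumed numbers, only about a fifth of the layers fit in the 48 GB of VRAM and the rest run on the CPU cores, which is why generation speed drops sharply once a model spills out of VRAM.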