@hardmaru
“AMD is aiming to disrupt 4K gaming, and build a Radeon GPU line-up to take on Nvidia in a similar way to how the firm has ramped up Ryzen processors to beat out Intel”
What about OpenCL and deep learning? @amd techradar.com/news/amd-recko…
Jonathan Fly 👾
@jonathanfly
Feb 2
Every cool new project is tested and developed with NVIDIA, or sometimes TPUs. AMD would need to step up so much, form a team to port popular projects like GPT-2, StyleGAN, etc, really get out there. If they're doing it I'm not seeing it, while NVIDIA is constantly engaged.
lab member 001
@l4rz
Feb 2
gpt-2 works fine on amd. stylegan2 does not
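The split l4rz describes comes down to op coverage: GPT-2 is built entirely from standard framework ops, while StyleGAN2 ships its own custom CUDA kernels (upfirdn2d, fused_bias_act) that had no HIP/ROCm port at the time. A toy sketch of that check — the op lists here are illustrative assumptions, not the models' real op inventories:

```python
# Illustrative only: why GPT-2 runs on ROCm while StyleGAN2 doesn't.
# GPT-2 uses standard framework ops; StyleGAN2 depends on custom CUDA
# kernels (upfirdn2d, fused_bias_act) with no HIP equivalent here.

# Hypothetical set of ops a non-CUDA backend supports.
STANDARD_OPS = {"matmul", "softmax", "layernorm", "gelu", "conv2d"}

def unsupported_ops(model_ops, backend_ops=STANDARD_OPS):
    """Return the ops the backend cannot run (empty set -> model ports cleanly)."""
    return set(model_ops) - set(backend_ops)

gpt2_ops = {"matmul", "softmax", "layernorm", "gelu"}
stylegan2_ops = {"conv2d", "matmul", "upfirdn2d_cuda", "fused_bias_act_cuda"}

print(unsupported_ops(gpt2_ops))       # empty: everything maps to standard ops
print(unsupported_ops(stylegan2_ops))  # the custom CUDA kernels block the port
```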
choongng
@choongng
Feb 2
I’m pretty sure AMD is deliberately ceding deep learning and GPU compute to NVIDIA so they can focus on winning server CPU and graphics markets. They are much smaller than Intel+NVIDIA so I’m not sure they’re wrong but it sure is disappointing.
Timothy Liu
@timothy_lkh_
Feb 2
They need to deliver the software alongside ORNL Frontier (~2021). Effectively, the US Govt has given them a significant cash boost to jump-start development. That said, they're not infallible (see the original Aurora, which failed to deliver in 2018)
Timothy Liu
@timothy_lkh_
Feb 2
AMD’s strategy seems to be Vega+ROCm to tackle compute workloads. So far, ROCm support hasn’t made it to any Navi cards, and it seems upcoming supercomputers are all Vega. And they’re not done building up ROCm software to match CUDA in features/stability/performance
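As a practical aside to the ROCm point: it's easy to check which stack a given machine actually has. A hedged, best-effort sketch — the function names are my own, and it relies on `torch.version.hip`, which is only set on ROCm builds of PyTorch:

```python
import shutil
import importlib.util

def detect_dl_stack():
    """Best-effort probe of the installed deep learning stack.

    Returns one of: 'no-torch', 'rocm-torch', 'cuda-torch', 'cpu-torch'.
    """
    if importlib.util.find_spec("torch") is None:
        return "no-torch"
    import torch
    # ROCm builds of PyTorch expose torch.version.hip; CUDA builds leave it None.
    if getattr(torch.version, "hip", None):
        return "rocm-torch"
    if torch.version.cuda:
        return "cuda-torch"
    return "cpu-torch"

def rocm_tools_present():
    """True if ROCm CLI tools are on PATH (a rough proxy for a ROCm install)."""
    return any(shutil.which(tool) for tool in ("rocminfo", "rocm-smi"))

print(detect_dl_stack(), rocm_tools_present())
```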
Maxime Lenormand
@MaxLenormand
Feb 2
As much as I'm all for competition and having multiple companies in one sector, I feel like NVIDIA has taken such a huge step ahead with CUDA and now RAPIDS that the hardware alone just isn't enough.
But I think a lot of us would love higher-VRAM consumer GPUs
Quentin Leon 🧢
@Qwolfblg
Feb 2
Was wondering about this as well.
AMD's Threadripper line of CPUs is massive for high-volume data pre-processing, and I'd love to pair that with an AMD GPU one day, but as it stands the performance and library support for DL tasks just aren't there yet.
Han
@HanchungLee
Feb 2
No macOS support planned, which implies no cuDNN-equivalent support.
On the positive side, $AMD pricing pressured $NVDA mainstream GPUs.
github.com/RadeonOpenComp…
Igor Moura
@igormp
Feb 2
When I got into ML, having to give away my AMD GPUs and buy one from NVIDIA just to speed up the stuff I was learning made me really sad