![loading a custom model by GPU uses RAM but loading on CPU doesn't. how can I not use RAM when I load by GPU as well? · Issue #11352 · ultralytics/yolov5 · GitHub](https://user-images.githubusercontent.com/77414512/231899227-39e22288-44ce-4ee8-a87c-043970349c40.png)
![cuda - Can CPU-process write to memory (UVA) in GPU-RAM allocated by other CPU-process? - Stack Overflow](https://i.stack.imgur.com/92Squ.jpg)
![Nvidia-SMI for Mixtral-8x7B-Instruct-v0.1 in case anyone wonders how much VRAM it sucks up (90636MiB) so you need 91GB of RAM : r/LocalLLaMA](https://preview.redd.it/nvidia-smi-for-mixtral-8x7b-instruct-v0-1-in-case-anyone-v0-24lwrshks58c1.png?auto=webp&s=67a5531f40dfd9bcb3c91ff143af27216089de91)