bevekspldnw on April 11, 2024 | on: Mistral AI Launches New 8x22B MOE Model
There is a user called TheBloke on Hugging Face who releases pre-quantized models pretty soon after the full-size drop. Just watch their page and pray you can fit the 4-bit in your GPU.
I’m sure they are already working on it.
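For anyone wanting to try it once a quantized upload lands, here's a minimal sketch of the transformers + bitsandbytes route for 4-bit loading. The model id below is the full-precision repo and just a placeholder assumption; swap in whichever quantized checkpoint actually appears:

    # Sketch: load a checkpoint with 4-bit weight quantization via bitsandbytes.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

    model_id = "mistralai/Mixtral-8x22B-v0.1"  # placeholder id; point at a quantized repo if one exists
    quant = BitsAndBytesConfig(
        load_in_4bit=True,                      # quantize weights to 4 bit at load time
        bnb_4bit_compute_dtype=torch.bfloat16,  # run the matmuls in bf16
    )
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        quantization_config=quant,
        device_map="auto",                      # spread layers across available GPUs/CPU
    )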
nathanasmith on April 11, 2024
TheBloke stopped uploading in January. There are others that have stepped up though.
bevekspldnw on April 11, 2024
Oh really? Who else should I be looking at?
That person is a hero, super bummed!
fzzzy on April 11, 2024
TheBloke's grant ran out.
MPSimmons on April 11, 2024
I think 4-bit for this is supposed to be over 70GB, so definitely still heavy hardware.
bevekspldnw on April 11, 2024
Fucking hell, my A6000 is shy of that and I can’t reasonably justify picking up a second.
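A rough back-of-envelope behind that 70GB figure, assuming the commonly cited ~141B total parameters for the 8x22B MoE (the experts share attention layers, so it's less than a literal 8 x 22B):

    # Approximate weight memory at different quantization widths.
    total_params = 141e9  # assumed total parameter count for the 8x22B MoE

    for bits in (16, 8, 4):
        weights_gb = total_params * bits / 8 / 1e9
        print(f"{bits:>2}-bit weights: ~{weights_gb:.0f} GB (plus KV cache and runtime overhead)")

    # 4-bit comes out around 70 GB -- well past a single 48 GB A6000.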