randkyp 23 hours ago [-]
Neat! While the physicality of having the CD spin while running inference is undeniably cool, I wonder if you could run larger models at higher speeds through the PS2 HDD accessory/Memory Card Micro SD adapter/the PS2's USB port.
I doubt the VUs can help with inference given their small scratchpad sizes and instruction set though, haha.
LocalH 45 minutes ago [-]
The network port is faster than the CDVD drive or any of those accessories with the exception of the HDD. The ethernet PHY links at 100Mbit, but the processors inside the PS2 are not really capable of pushing that speed, the best I ever saw when installing games over the network with a hyper-optimized IP stack (on the IOP, IIRC) was something like 5MiB/s.
The HDD is the fastest form of I/O one can use on the PS2. It might not even need to be modified - depending on how well it's coded, it may be possible to run this software via Open PS2 Loader, which will replace CDVDMAN with a custom version that will access USB/ETH/HDD (and as mentioned in sibling comments, USB on the PS2 is version 1.1 and is much slower than even the CDVD drive).
Both network and HDD will also greatly minimize the cost of seeking the CDVD, which may be an issue depending on how the CD is laid out. CD access is up to 24x, DVD-ROM access is at 4x. DVD is thus slightly faster, and can be further increased by pushing the used data to the edge of the disc via a dummy file (traditionally, developers and game modders used Sony's own CD/DVD Generator software to determine the order of files being added to the disc, thus allowing the boot files to come first, followed by the dummy file, then any data files that need the extra speed).
accrual 21 hours ago [-]
The PS2's USB port is limited to 1.1 speeds so unfortunately it's much slower than the CD interface. The phat models have an internal IDE port that is trivially converted to SATA though, and is plenty fast with an SSD!
mghackerlady 17 hours ago [-]
I'm excited for the PS2 SDK. Currently there isn't a lot in that space that won't get you sued
pjmlp 15 hours ago [-]
Some of us have it legally via PS2Linux, naturally distribution isn't allowed.
mghackerlady 5 hours ago [-]
Right, but that's just developing for an old version of Linux running on weak hardware; you can't do any of the crazy stuff the EE is really capable of. Plus, development is still hard for newcomers. I wish RenderWare had been open-sourced when it died instead of bitrotting.
pooparse 23 hours ago [-]
IIRC the EE had some interesting hardware with vector units. Were these of any use/benefit here?
keremimo 16 hours ago [-]
My goodness... Is nothing sacred anymore?
mememememememo 4 days ago [-]
How many tok/hr?
Real_Egor 9 hours ago [-]
Now you must teach it how to play Multiplayer AoE:II!
maltyxxx 13 hours ago [-]
[dead]
SilentEditor 4 days ago [-]
Love this project. The CD streaming trick is such a smart constraint hack, and honestly the best part is you trained the model for the hardware instead of forcing a desktop recipe onto PS2.
Curious about 2 things if you can share:
what's your per-token latency on real hardware
how much quality loss came from PSNT quantization vs fp16 baseline
Either way this is peak hacker energy, shipping on actual hardware makes it 10x cooler.
xaskasdf 4 days ago [-]
There wasn't any quality loss; PSNT as a quantization format mainly converts the model to fit the console's constraints (you can convert any model you want, even though I trained a model specifically for this hardware). It's q8 quantization, so quality loss is negligible at these sizes. For the speed, I will fix the tok/sec counter since right now it always shows 0.
PS: Thank you! And I forgot to mention PSNT also supports bitnet models; they work like crap though
SilentEditor 3 days ago [-]
That's super helpful, thanks for the details. Makes sense now that PSNT is more of a transport/runtime format for the PS2 constraints than a quality hack.
Very cool that it supports bitnet too even if results are rough right now; feels like there's a lot of room to tune there over time.
When you do fix tok/sec, are you planning to post per-stage timings too (tokenizer, weight stream, matmul, sampling)? Would be awesome to see where the biggest bottleneck is on real hw.
SachitRafa 4 days ago [-]
The CD-ROM streaming approach is the real insight here, keeping only activations and KV cache in RAM and streaming weights one matrix at a time sidesteps the 32MB constraint entirely. It's essentially the same trick modern edge inference does with flash storage, just on hardware from 2000.
Curious about the latency profile, with CD-ROM read speeds around 1.6 MB/s on PS2, the 77MB SmolLM2 model being too slow makes sense, but how does the 10MB brandon-tiny feel in practice? Are you getting tokens per minute or more like tokens per several seconds?
Also interested in the custom PSNT format decision, was the main motivation the PS2's MIPS alignment constraints, or was there something about the existing GGUF/llama.c formats that made them impractical to parse on the Emotion Engine?