newfocogi 3 days ago

I'm enthusiastic about BitNet and the potential of low-bit LLMs - the papers show impressive perplexity scores matching full-precision models while drastically reducing compute and memory requirements. What's puzzling is that we're not seeing any major providers announce plans to leverage this for their flagship models, despite the clear efficiency gains that could theoretically enable much larger architectures. I suspect there might be some hidden engineering challenges around specialized hardware requirements or training stability that aren't fully captured in the academic results, but I would love insights from anyone closer to production deployment of these techniques.

  • swfsql 3 days ago

    I think that since training must happen on a non-bitnet architecture, tuning towards bitnet is always a downgrade on its capabilities, so they're not really interested in it. But maybe they would be if they offered cheaper plans, since its efficiency is relatively good.

    I think the real market for this is for local inference.

  • strangescript 3 days ago

    I find it a little confusing as well. I wonder if it's because so many of these companies have gone all in on the "traditional" approach that deviating now seems like a big shift?

  • waynenilsen 3 days ago

    I suppose hardware support would be very helpful, new instructions for bitpacked operations?

  • danielmarkbruce 3 days ago

    People are almost certainly working on it. The people who are actually serious and think about things like this are less likely to just spout out "WE ARE BUILDING A CHIP OPTIMIZED FOR 1-BIT" or "WE ARE TRAINING A MODEL USING 1-BIT" etc, before actually being quite sure they can make it work at the required scale. It's still pretty researchy.

zamadatix 3 days ago

For anyone who hasn't read the previous papers: the "1.58-bit" part comes from using 3 values (-1, 0, 1), and log2(3) ≈ 1.58...
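
A rough sketch of what that works out to in practice (toy code, not from the repo; the packing scheme is just an illustration): since 3^5 = 243 fits in one byte, you can store 5 ternary weights in 8 bits, i.e. 1.6 bits per weight, close to the log2(3) ≈ 1.585 floor.

    // Illustrative only: pack 5 ternary weights {-1, 0, +1} into one byte
    // via base-3 coding. 3^5 = 243 <= 256, so 5 "trits" fit in 8 bits,
    // giving 1.6 bits/weight vs. the log2(3) ~= 1.585 theoretical minimum.
    #include <array>
    #include <cstdint>
    #include <cstdio>

    uint8_t pack5(const std::array<int8_t, 5>& w) {
        uint8_t code = 0;
        for (int i = 4; i >= 0; --i)
            code = static_cast<uint8_t>(code * 3 + (w[i] + 1));  // {-1,0,1} -> {0,1,2}
        return code;
    }

    std::array<int8_t, 5> unpack5(uint8_t code) {
        std::array<int8_t, 5> w{};
        for (int i = 0; i < 5; ++i) {
            w[i] = static_cast<int8_t>(code % 3) - 1;             // back to {-1,0,1}
            code /= 3;
        }
        return w;
    }

    int main() {
        auto w = unpack5(pack5({-1, 0, 1, 1, -1}));
        std::printf("%d %d %d %d %d\n", w[0], w[1], w[2], w[3], w[4]);  // -1 0 1 1 -1
    }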

sheerun 44 minutes ago

When will AIs learn not to bla bla bla by default?

trebligdivad 3 days ago

Has someone made an FPGA or ASIC implementation yet? It feels like it should be easy (and people would snap it up for inference).

alkh 3 days ago

Sorry for a stupid question, but to clarify: even though it is a 1-bit model, is it supposed to work with any type of embeddings, even ones taken from larger LLMs (in their example, they use HF1BitLLM/Llama3-8B-1.58-100B-tokens)? I.e., does it not have a built-in embedding layer, and instead rely on embeddings provided separately?

wwwtyro 3 days ago

Can anyone help me understand how this works without special bitnet precision-specific hardware? Is special hardware unnecessary? Maybe it just doesn't reach the full bitnet potential without it? Or maybe it does, with some fancy tricks? Thanks!

  • hansvm 3 days ago

    I haven't checked this one out yet, but a common trick is using combinations of instructions and data invariants allowing you to work in "lanes".

    The easiest example is xor, which can trivially be interpreted as either xoring one large integer or xoring a vector of smaller integers.

    Take a look at the SWAR popcount here [0] for a pretty common/easy case of that technique being good for something in the real world.

    Dedicated hardware is almost always better, but you can still get major improvements with a little elbow grease.

    [0] https://nimrod.blog/posts/algorithms-behind-popcount/
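
    A minimal sketch of the lane idea (toy code, not from bitnet.cpp; needs C++20 for std::popcount): one 64-bit xor is already eight independent byte-lane xors, and xor + popcount gives a 64-wide dot product of ±1 values in a couple of instructions.

        // Toy illustration of SWAR "lanes": xor never carries between bits,
        // so one 64-bit xor == eight independent 8-bit xors. The same trick
        // powers binary dot products: for +/-1 values encoded as single bits,
        // dot = 64 - 2 * popcount(a ^ b)  (matches +1, mismatches -1).
        #include <bit>      // std::popcount (C++20)
        #include <cstdint>
        #include <cstdio>

        int main() {
            uint64_t a = 0x0102030405060708ULL;  // eight packed 8-bit lanes
            uint64_t b = 0x1111111111111111ULL;
            uint64_t lanes = a ^ b;              // one op, eight lane-wise xors

            int dot = 64 - 2 * std::popcount(a ^ b);
            std::printf("lanes=%016llx dot=%d\n",
                        static_cast<unsigned long long>(lanes), dot);
        }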

    • 15155 3 days ago

      This is extremely easy to implement in an FPGA.

  • summerlight 3 days ago

    The major benefit would be the significant decrease in memory consumption rather than in compute itself. The major bottleneck of current LLM infra is typically memory bandwidth, which is why the chip industry is going crazy on HBM. Compute optimization surely helps, but this is useful even without any hardware changes.
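
    Rough back-of-envelope numbers (my assumptions, nothing measured): a 3B-parameter model needs roughly 6 GB of weights at fp16 but only about 0.6 GB at ~1.6 bits per weight, and since decoding is mostly weight streaming, the bandwidth saving is of the same order.

        // Back-of-envelope only: weight memory for an assumed 3B-parameter model.
        #include <cstdio>

        int main() {
            double params = 3e9;
            double fp16_gb    = params * 16.0 / 8.0 / 1e9;  // ~6.0 GB at 16 bits/weight
            double ternary_gb = params *  1.6 / 8.0 / 1e9;  // ~0.6 GB at ~1.6 bits/weight
            std::printf("fp16: %.1f GB, ternary: %.1f GB\n", fp16_gb, ternary_gb);
        }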

    • az226 3 days ago

      Inference speeds go brrrr as well.

  • eightysixfour 3 days ago

    While fancy hardware would make it faster, what you are comparing it to is a bunch of floating-point multiplications on large numbers. I believe in this case they just use a lookup table:

    If one value is 0, it is 0.

    If the signs are different, it is -1.

    If the signs are the same, it is 1.

    I’m sure those rules can be implemented with relatively few instructions on far less power-hungry hardware.
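
    A toy sketch of that point (assumptions: one int8 per ternary weight and int8 activations; this is not the actual bitnet.cpp kernel): with weights restricted to {-1, 0, +1}, every "multiply" in a dot product collapses into add, subtract, or skip.

        // Illustrative only: ternary-weight dot product with no multiplies.
        #include <cstdint>
        #include <cstdio>

        int32_t ternary_dot(const int8_t* w, const int8_t* x, int n) {
            int32_t acc = 0;
            for (int i = 0; i < n; ++i) {
                if (w[i] > 0)      acc += x[i];  // weight +1: add the activation
                else if (w[i] < 0) acc -= x[i];  // weight -1: subtract it
                // weight 0: contributes nothing
            }
            return acc;
        }

        int main() {
            const int8_t w[4] = {1, -1, 0, 1};
            const int8_t x[4] = {5,  3, 7, 2};
            std::printf("%d\n", ternary_dot(w, x, 4));  // 5 - 3 + 0 + 2 = 4
        }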

Scene_Cast2 3 days ago

Neat. Would anyone know where the SDPA kernel equivalent is? I poked around the repo, but only saw some form of quantization code with vectorized intrinsics.

delegate 3 days ago

I assume it is not as powerful at some tasks as a full-sized model, so what would one use this model for?

faragon 3 days ago

I'm glad Microsoft uses Bash in the example, instead of their own Windows shells. As a user I would like to have something like "Git Bash" for Windows built into the system as the default shell.

  • not_a_bot_4sho 3 days ago

    WSL is where it's at today. It's not quite what you're asking for, as it is a separate virtual OS, but the integration is so tight that it feels like you're using your favorite shell natively in Windows.

    • diggan 3 days ago

      > integration is so tight that it feels like you're using your favorite shell natively in Windows

      WSL1 certainly felt that way; WSL2 just feels like any other virtualization manager and basically works the same. Not sure why people sing the praises of WSL2. I gave it a serious try for months, but there is a seemingly endless list of compatibility issues which I never had with VMware or VirtualBox, so I just went back to those instead, and the experience is more or less the same.

      • throwaway314155 3 days ago

        Probably because it has relatively painless GPU sharing with passthrough. As far as I know, that sort of feature requires a hypervisor-level VM, which is not something you get with VirtualBox.

        • diggan 3 days ago

          Someone correct me if I'm wrong, but I think you can use a KVM or QEMU backend for VirtualBox and that way get GPU pass-through. Probably not out of the box though.

          • EgoIncarnate 3 days ago

            The WSL2 GPU passthrough is more like a virtual GPU than KVM-style device passthrough. I believe it's effectively a device-specific Linux userland driver talking to a device-specific Windows kernel driver, with a Linux kernel shim bridging the two. If I recall correctly, the Linux userland drivers are actually provided by the Windows driver.

  • aithrowawaycomm 2 days ago

    They are using Windows shells, just not on Macs. This is what the caption for the video says:

    > A demo of bitnet.cpp running a BitNet b1.58 3B model on Apple M2:

    And this is in their Windows build instructions:

    > Important! If you are using Windows, please remember to always use a Developer Command Prompt / PowerShell for VS2022 for the following commands

  • layer8 3 days ago

    Just install Cygwin.

    Not sure what you mean by “default shell”. The default shell on Windows is this: https://en.wikipedia.org/wiki/Windows_shell. I don’t suppose you mean booting into Bash. Windows doesn’t have any other notion of a default shell.

    • faragon 3 days ago

      I used Cygwin for more than a decade. I prefer Git Bash (msys-based).

      • hhejrkrn 3 days ago

        I thought msysgit was also Cygwin-based.

ein0p 3 days ago

1.58bpw is not “1 bit”.

lostmsu 3 days ago

No GPU inference support?

  • diggan 3 days ago

    > that support fast and lossless inference of 1.58-bit models on CPU (with NPU and GPU support coming next).