So you would have to pair this with a switch that not only does VLANs but also somehow does your NAT for you.
Usually the routers you install OpenWRT on are really a CPU with one port into a VLAN-capable switch, and the port labeled WAN on the device is just VLAN’d off separately by default. One cool thing OpenWRT lets you do on “normal” hardware is change the VLAN settings on those switch ports, settings that aren’t accessible under stock firmware.
But if they are shipping “just” the router piece and making people go get their own VLAN-capable switch, I’m not sure exactly what hardware they expect people to use. And I’m not sure what being connected to the switch over one real 2.5G cable does to LAN/WAN throughput, vs. how a “normal” router ties the CPU into the switch through means not known to mortal minds. Maybe it is just as good, maybe it is a huge bottleneck (see the sketch below). It is definitely going to add cost over the $89 sticker price.
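A back-of-envelope way to reason about it (a sketch, assuming a plain router-on-a-stick layout where all traffic is routed between the LAN and WAN VLANs, with made-up numbers): every routed packet crosses the single trunk cable twice, once tagged for each VLAN, so each direction of the full-duplex link has to carry download plus upload combined.

```python
TRUNK_GBPS = 2.5  # the one full-duplex link between the router CPU and the external switch

def trunk_can_carry(download_gbps: float, upload_gbps: float) -> bool:
    """Router-on-a-stick: each routed packet crosses the trunk twice
    (in on one VLAN tag, back out on the other), so each direction of
    the full-duplex link carries download + upload combined."""
    return download_gbps + upload_gbps <= TRUNK_GBPS

print(trunk_can_carry(2.3, 0.2))  # True: typical asymmetric home traffic fits
print(trunk_can_carry(2.5, 2.5))  # False: symmetric line rate would need 5 Gbps each way
```

Under that assumption it is roughly as good as a built-in switch for asymmetric home traffic, and only becomes a real bottleneck when you push both directions toward line rate at once.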
But if most people are just going to run the fiber modem straight to WiFi, maybe this is actually the right configuration?
I don’t think that’s what accepting harmful interference means. It means something more like: if there is noise in the channel, the device isn’t allowed to just crank up its own power to clobber the noise, even if holding back breaks it or otherwise makes it not work right. It doesn’t mean you have to build the device so that some kinds of interference will cause it to break.
I think there are consumer-grade GPUs that can run this on a single card with enough quantization. Or if you want to run it on a CPU, you can buy and plug in enough DIMMs for only a somewhat large amount of money.
Looks like it has 32B in the name, so you need enough RAM to hold 32 billion weights plus activations (the current values for the layer being run right now, which I think should be less than a gigabyte). The weights probably start out as 16-bit floats, so something like 64 gigabytes; but if you quantize to cram weights into fewer bits, you can go down to around 4 bits per weight, or more like 16 gigabytes of memory, to run a slightly worse version of the model.
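As a sanity check on those numbers, here is the same arithmetic as a tiny Python sketch (the ~1 GB activation figure is the rough guess from above, not a measured value):

```python
def model_memory_gb(params_billions: float, bits_per_weight: float,
                    activations_gb: float = 1.0) -> float:
    """Rough RAM/VRAM needed: weights at a given precision, plus a
    small allowance for the activations of the layer being run."""
    weight_gb = params_billions * 1e9 * bits_per_weight / 8 / 1e9
    return weight_gb + activations_gb

print(model_memory_gb(32, 16))  # ~65 GB at native 16-bit floats
print(model_memory_gb(32, 4))   # ~17 GB with 4-bit quantization
```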
If you’re good enough at writing to communicate all the information you need to something that is more different from you than any human is, why do you feel like you aren’t the best at writing?
That’s not allowed on Wikipedia; you have to use verifiable information from reliable secondary sources instead.