Mixing 1Gbps, 2.5Gbps and 10Gbps has sort of been a nightmare for me on Mikrotik at least. My WAN port is 2.5Gbps and you would expect that I get full speed 112MB/s Internet on the 1Gbps LAN ports, but that only happens if I force the WAN port to negotiate at 1Gbps. If I leave it at 2.5Gbps, then I get something like 50MB/s instead, and it oscillates by 20MB/s or more during the life of that TCP connection.
I don't think it's entirely a Mikrotik issue as I believe this is just a physics thing - you're firing off one port at a certain speed so the buffer of the other port fills up too quickly unless the faster one slows down (resulting in excessive packet loss.) But it looks like Mikrotik has the most complaints about this.
The solution appears to be to enable Flow Control, but it's never clear in which direction or which port it needs to be enabled on, and I haven't really had any success with any combination.
It's a MikroTik issue in the end, their buffers are simply too small for the interface speeds they support. It's part of being budget chips. Flow control is a poor solution as it blocks all flows whenever one flow makes things busy; best is just a properly scaled chip.
I swapped in a more expensive "low end" (for the family) Broadcom Trident 3 based switch in place of 2 different 10G MikroTik boxes and the mixed speed transfers are now behaving exactly like one would expect, without any flow control.
Yeah, I did get the feeling that Flow Control (while it may alleviate some issues and/or fix others entirely) is just a hack. If what you're saying is true regarding the buffer sizes, I don't understand why they would knowingly make that choice as it causes poor performance which is not what their devices should be known for. It also completely negates the whole point of one 2.5Gbps port in the first place!
I'm curious though: Going by your statement, wouldn't even a large buffer fill eventually? And by eventually, in computer terms, that might be milliseconds/a few seconds if we're talking transfer speeds of hundreds of megabytes or gigabytes per second. How big is a buffer supposed to be exactly?
Packet buffers get a bit complicated but the hand wavy explanation is you want enough buffer to be able to smooth over situations like mixed speed goes-intos and goes-outtas or jitter from small bursty streams, but not so much buffer that you end up delaying bandwidth/loss discovery algorithms or holding onto packets already considered stale more than you are actually helping. In the end for local switching you generally want a very small buffer, but not necessarily just the smallest buffer you can find.
There are endless rabbit holes of details though. You want a buffer with at least 1 hardware queue per network queue (in advanced situations you may even want a hardware queue set per flow) and you want these queues to be intelligent - priority queues, weighted round robin queues, best effort queues. In each of these queues you also probably want things like WRED, which counterintuitively helps avoid congestion by starting to drop packets _before_ the buffer is full. You also want the buffer to be as "flat" among interfaces as possible, e.g. if the switch has 12 interfaces it's best if it's all 1 line rate ASIC with 1 shared buffer, not an 8 port + 4 port with an interconnect and its own set of internal QoS. For non local things, latency also starts to become as big a factor in buffer sizes as interface speeds.
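To make the WRED point concrete, here's a rough sketch of the classic RED-style early-drop curve in Python. The thresholds and max probability are made-up illustrative numbers, not what any particular switch uses:

    import random

    # Illustrative thresholds only; real switches expose these as per-queue config.
    MIN_THRESH = 20_000    # bytes of average queue depth before any early drops start
    MAX_THRESH = 80_000    # average queue depth where early-drop probability peaks
    MAX_DROP_P = 0.1       # drop probability once the queue reaches MAX_THRESH

    def wred_should_drop(avg_queue_bytes: int) -> bool:
        """No drops below MIN_THRESH, a linear ramp of drop probability
        between the thresholds, and effectively tail drop above MAX_THRESH."""
        if avg_queue_bytes < MIN_THRESH:
            return False
        if avg_queue_bytes >= MAX_THRESH:
            return True
        ramp = (avg_queue_bytes - MIN_THRESH) / (MAX_THRESH - MIN_THRESH)
        return random.random() < ramp * MAX_DROP_P

The point is that senders see a trickle of drops (or ECN marks) while the queue is still fairly shallow and back off before the buffer actually fills.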
I forget what I ended up finding the buffer size of my MikroTik switch to be but I remember thinking it should have been 4x-8x the size - whatever it was it was truly tiny and didn't seem to be operating well.
As for why MikroTik makes things with known-too-small buffers... it's cheaper than proper high speed interfaces but more performant than just using low speed interfaces. Sure, it's not a solid 2.5 Gbps, but for the price it'd take for them to do that "right" they would be able to do 10 Gbps ports "wrong" instead. I still use my MikroTiks, I just make sure not to mix interface speeds - or if I do, it's with something on the side where I don't really care about having perfect performance to it, just a generally fast connection when I do use it.
Buffer counts and buffer sizing is also a notoriously cut-throat game in vendor marketing, almost on par with (mis-)use of the term "wire speed".
This is not an easy/cheap problem to solve, and I fully agree with the conclusion: Mikrotik hardware is good up to the point where the price/performance curve falls the wrong way in a particular situation.
I also get a lot of use from Mikrotik hardware, particularly the passively-cooled models where performance isn't as important as price + silence.
I had to fight this battle in 1998 with a DSL SoHo modem/NAT router (single box) with a 10Mb/S Ethernet jack routing through an ADSL modem with a maximum speed of 8Mb/S Down and 1Mb/S Up. The 'Up' path was killer with the PC trying to stream 10Mb/S Ethernet into a NAT router that could (at best) dump 1Mb/S up through DSL.
So a 10/1 ratio.
And yes, make your buffers too big (we had relatively tons of memory) and all you get is weird latency cycles as the higher level protocols over-send, then fall back. Make your buffers too small, and the total throughput falls off, but the latency is far more 'normal'. We ended up buffer scaling based on what the upstream connection really achieved, so larger buffers for a 1Mb/S link than for a 128Kb/S link. We would start the router at 1Mb/S buffer sizing, and scale down if the DSL link throughput was less than that.
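A hedged sketch of that kind of scaling in Python - obviously not the 1998 code, and the 250 ms delay target is just an illustrative number:

    def upstream_buffer_bytes(measured_link_bps: float, target_delay_s: float = 0.25) -> int:
        """Size the upstream queue to hold roughly target_delay_s worth of traffic
        at the rate the DSL link actually achieves, rather than a fixed amount."""
        return int(measured_link_bps * target_delay_s / 8)

    # Start assuming the full 1 Mb/s uplink, then shrink if the link trains lower.
    print(upstream_buffer_bytes(1_000_000))  # ~31 KB of buffer at 1 Mb/s
    print(upstream_buffer_bytes(128_000))    # ~4 KB of buffer at 128 kb/s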
At the time we were working through that, a number of people would scream 'just give it more buffer', not understanding how badly that broke something like TCP.
I've replaced several MikroTiks with something more expensive due to them just being too slow under load, it's just how they are. You need to spend real $ to get real speed, even for gigabit connections. I know people who use them extensively who say the same.
Isn't this the kind of problem that ECN is supposed to solve? Unless the buffers are very small compared to the port throughput, the upper layers should be able to handle it.
ECN purely allows the sending CCA to modulate sending rate before packet loss. Buffers filling up does not change, what changes is how early the sender reduces their window sizes and whether packet loss occurs before then.
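To make that concrete, here's a toy Reno-flavoured reaction in Python - not any real stack's code, just the shape of the behaviour being described:

    def on_ack(cwnd: float, ece_marked: bool) -> float:
        """An ACK carrying an ECN echo means a queue marked the packet instead of
        dropping it, so the sender halves its window early - no data was lost."""
        if ece_marked:
            return cwnd / 2
        return cwnd + 1 / cwnd  # normal congestion-avoidance growth

    def on_loss(cwnd: float) -> float:
        """Without ECN (or once the queue overflows anyway), the same window
        reduction only happens after a packet has actually been dropped."""
        return cwnd / 2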
At 2.5 Gbps having 5 ms of buffer requires >1 MB for that single hardware queue which was either at or over the available buffer for the entire switch IIRC.
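The arithmetic behind that figure, as a quick sanity check:

    rate_bps = 2.5e9           # 2.5 Gbps port
    delay_s = 0.005            # 5 ms worth of buffering
    buffer_bytes = rate_bps * delay_s / 8
    print(buffer_bytes / 1e6)  # ~1.56 MB for that single queue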
Having too large of a buffer, particularly on gigabit class software routers/NAT points which use CPU+RAM instead of expensive hardware, is definitely the more common problem. There it's too easy to fall into the trap of "I have 512 MB of RAM, why shouldn't I let as much as is available be used as buffer, it's helping", which then makes you wish you had 4 MB of buffer instead.
I noted elsewhere, I fought this battle once. People outside the development team kept saying "it needs more buffers", but they were not looking at the network analyzer, and they weren't looking at the internal states. As buffers became excessive, it started with weird latency bouncing, and it just got worse and worse as you added buffers. We had a network impairment simulator. It was fun to turn the knobs on that and see how that interacted with packet loss buried in your buffer.
I suspect as these ports get faster, flow control is going to be more important. I had a network issue on my local homelab that required enabling flow control to work around. This was particularly annoying on the UniFi interface as I had to drop back to a legacy interface to even find it.
We're probably going to be in for a bit of pain until everything can reliably function at 10Gb/s
I've had this issue internally too, and found that only Mikrotik's own S+RJ10 transceivers handle the 2.5/5 Gbps speeds gracefully on their switches. Others sometimes negotiate at 10 Gbps and you get like 1.2 Gbps maximum over the 2.5G interfaces.
It's kind of infuriating, to the point I bought a separate dedicated 2.5G switch with 10G uplinks and plugged that over to the Mikrotik with a DAC.
I have a 2.5Gb/s WAN (as well as a second 1Gb/s WAN) going into a UDM-Pro, with a trunk link going at 10Gb/s to a main switch via DAC, and a second 10Gb/s link going to a 10Gb/s switch in my office. That just works (at least since Unifi solidified the firmware for dealing with funky fiber PPPoE situations).
I'm still struggling to get my Ubiquiti devices to negotiate 10Gbps between themselves, hah. I suspect my EdgeSwitch is just too old to reliably hit that mark.
You don't really want flow control here. Flow control on the 1G interface isn't needed, your PC can manage line rate packets (I assume); asking the WAN to stop when the 1G is full will cause trouble for flows to other ports.
What you want is a small amount of buffer, but when the WAN is sending packets too fast, they should be dropped and TCP will figure it out. But maybe the buffers are too big and you get latency spikes. Or maybe something else is funky when packets are getting dropped?
Flow control usually causes more issues as things will get delayed or dropped in bunches.
I wonder, what is stopping people from buying a x64 PC with a good CPU, slapping multiple ethernet cards over pcie (which support amazing speeds, good enough for 10G) and installing openwrt/pfsense on it?
Cost, both from power consumption and purchase. Hard to compete on those fronts with a MikroTik or Ubiquiti router doing 1Gb/s through hardware offload with a tiny MIPS or ARM chip.
One thing I've seen some people run into is if their ISP insists on PPPoE, inbound packets will tend to stick to a single RX queue, which leads to single threaded handling of that traffic, and it can be too much for one core, especially if you get a lower power CPU. It might be possible to convince the network card to look a bit farther into the packet to hash packets to different queues, but at least it doesn't happen out of the box.
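A toy illustration of why PPPoE pins everything to one queue, in Python. The hash here is a stand-in for the NIC's real RSS function (usually Toeplitz), and the field choices are only meant to show what the hardware can and can't see:

    NUM_QUEUES = 4

    def rx_queue(fields: tuple) -> int:
        # Stand-in for the NIC's RSS hash over whatever fields it managed to parse.
        return hash(fields) % NUM_QUEUES

    # Plain IP traffic: the NIC sees the 5-tuple, so different flows spread out.
    plain_flows = [("10.0.0.2", "192.0.2.1", 50000 + i, 443, "tcp") for i in range(8)]
    print({rx_queue(f) for f in plain_flows})            # typically lands on several queues

    # PPPoE traffic: if the NIC can't parse past the PPPoE header, all it can hash is
    # the outer frame (same MACs, same session ID), so every packet looks identical.
    pppoe_outer = ("aa:bb:cc:dd:ee:ff", "11:22:33:44:55:66", 0x8864, 0x0001)
    print({rx_queue(pppoe_outer) for _ in range(1000)})  # always the same single queue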
Somewhere around HN there is a guy that did just this for the 25G internet to his home, full talk with slide deck and all. I think he wrote some custom Go code to simplify running it as well.
In general though unless you need >10G you'll come behind on (good) COTS offerings in price and performance. Particularly if you need features like NAT or firewalling where software starts adding latency or performance cliffs at certain intervals while things like a low end Fortigate have high levels of hardware offload.
They should always auto negotiate down to whatever the slowest direct link in the chain is. So like if you connect a 2.5Gbps port directly to a 1Gbps switch then the driver should negotiate it down to 1Gbps.
> I wonder why there was never a standardized PCIE 'card' format? Seems like the perfect thing for extra storage with NVME being so common these days.
The best thing about standards is that there are so many of them to choose from. The closest analogue is the ExpressCard (https://en.wikipedia.org/wiki/ExpressCard), but there's also the internal-only M.2 (https://en.wikipedia.org/wiki/M.2), which is the most common non-server format for NVMe; mini-PCIe (similar to M.2, used mostly for wireless); SD express (SD cards, but with PCIe); and CFexpress (CompactFlash cards, but with PCIe).
I have a thinkpad p71 with an Expresscard x1 slot and can't figure out what to use it for since the laptop has so much existing IO out of the box.
It would have been nice if Framework adopted this existing standard and started making their own ExpressCard modules instead of making their own USB-C thing that doesn't fit in any other laptop.
I do have a vague recollection of having a lot of "fun" running Linux on an old laptop that for some reason had a PCMCIA WiFi card(I think the onboard one might have been dead or something? Or it was so damn old it never had one).
Harder times, sure. But also simpler, because at least the mainstream distros back then weren't so damn convoluted and it could never take me more than 30 seconds to change the DNS servers permanently. I set up NextDNS on my Pop OS laptop recently and it took me 2 hours of dicking around with systemd.
Starting to yearn back to Arch Linux again... At least with that I actually know what I'm running under the hood...
Wifi came about around the time that every laptop, even small ones, had PCMCIA/cardbus slots and USB was fairly new as well, being used mainly for keyboards, mice, and joysticks. I remember having to buy a PCI USB card for my workstation because the motherboard didn't come with any USB ports even though it was less than a year old.
Oh my gosh, no! Those of us who needed it did like it a lot, and PC Cards before it. But PCMCIA disbanded in 2009, ExpressCard 2.0 failed to catch fire, and USB got "good enough". And now it's apparently acceptable to parade around one's dongles in public.
Many PCMCIA/ExpressCard solutions also used dongles. Dongles are unsightly but USB-C/Thunderbolt with a dongle can really replace anything that an ExpressCard could do, and more.
Basically everything added to TB over its various revisions are things EC can't do because EC was discontinued. 12.5x the bandwidth? Sharing a port with USB? USB-PD out of the same connector?
I absolutely loved expresscard. So many crazy projects from the last decade wouldn't have been possible without it. I had an "eGPU" on a laptop back in 2012 by hooking up a riser card to an expresscard slot [1]. The image quality was atrocious and the stuttering almost gave me a stroke, but it was still neat to use a desktop video card with a laptop.
The best part about EC was that there was no "handshakes" or any other software layer nonsense like you get with Thunderbolt, it was just plain old PCIe and it just worked. Every device you plugged in was fully "native" to the system and didn't require any screwy stuff to work right. SAS controllers just worked. 4x network cards just worked. Serial cards just worked.
These days, with my old Dell workstation on its last legs, it's looking like EC is a dead end. It's a shame, because I doubt we'll ever see that level of simplicity for connecting random devices to our computers again. Yes, the connector sucked and the form factor was awkward, but it really had a special place to me.
Currently rocking an ExpressCard to 3x USB-3 adapter in my 2007 Core 2 Duo laptop, with my laptop motherboard modified to supply 5V to the two reserved EC pins, and the EC modified to pull 5V from the two pins instead of generating 500mA of 5V using a boost converter. The card works fine on Linux and is noticeably faster than USB 2.0, but the drivers blue-screen on Vista and I have not tested on 7 (the laptop is too old for proper drivers, and today is more of a toy and novelty than a proper tool, but Linux Mint MATE runs excellently still).
And PCMCIA before that. I did like it actually. I used these cards to add wifi to laptops that didn't have it, and later in the ExpressCard days it was a cheap way to add USB 3.0.
Its real replacement is Thunderbolt though! That's basically PCIe.
I really hope that this form factor catches on. It's actually possible, since there's no chicken and egg problem -- this card can be used on any computer. It works better in a Framework laptop, but it's just another dongle on others. Hopefully a big manufacturer notices and starts using it.
Physical compatibility aside, sticking a large rigid adapter supported only by the port into a laptop seems like a great way to damage the port and is still clunkier than a standard dongle with a cable.
With the small caveat that you need enough space around the port; on other computers, it might block other nearby ports, or even not fit if there's any protuberance. Passive USB-C plug to USB-C socket cables (that is, USB-C "extension cords") are AFAIK forbidden by the standard (and ignoring the standard and making one anyway would, as far as I understand, allow for dangerous combinations like putting 5A into a 3A cable), so they cannot be used to workaround this limitation.
But there are active USB-C extension cables, as well as hubs that connect with USB-C to the host and have at least one USB-C port to connect a device. Beyond that, forbidden as passive extension cables may be, there are dozens of brands selling them on Amazon, so they're always an option even if they create a potential misuse hazard.
Nice! Except I see that ethernet is a bit too large to fit the body, so this hangs a bit off of the side.
Is there an advantage to this over an external ethernet to USB-C adapter (e.g. https://www.amazon.com/Anker-Ethernet-Portable-1-Gigabit-Chr...)? These days, I find that when using my laptop at home, a decent USB-C hub/dock is pretty much a pre-requisite given the number of things I need to connect (for me, monitor + mouse + keyboard + webcam + webcam light) and many of these have ethernet adapters built in anyway (which I don't personally use since my WiFi is fast enough).
They are! Unless you'd like it to be a USB-A card occasionally. Or an SD reader. Or you would like to buy a laptop that lets you choose exactly what ports you want, with the power to change it any time.
I'm not a fan of having a such a large USB-C hub built into the laptop at all. This adapter shows the weird compromise where there is not enough space inside the laptop for a complicated adapter but a non-trivial amount of space inside the chassis is still being used.
I'd much rather have more battery volume inside the computer with all the USB-C ports directly exposed and then have the option to carry around a USB-C dongle with the extra ports I might need.
Then the Framework just isn't for you. The Framework was designed to be a portable laptop that is also modular and repairable. Apple's design philosophy seems to most closely match yours - only USB-C ports with separate dongles.
Not sure why you assume I don't want a repairable laptop or what some plastic USB-C port extenders have to do with modularity.
Hiding a USB-C hub inside a laptop and calling it "modularity" seems pretty silly.
Maybe if that hub connected to some kind of riser card PCI-E interface and was replaceable I'd take this modularity claim more seriously. At least then I'd be able to replace the whole contraption with NVMe slots, a GPU, 10GigE, etc.
> Maybe if that hub connected to some kind of PCI-E interface and was replaceable I'd take this claim more seriously.
Well congrats, it is connected to PCIe because it's thunderbolt, and there's no hub to need to replace because it has a direct line to the CPU (you can replace the main board if you want).
No you're missing the point. The hub itself is built into the chassis so even if it is thunderbolt, anything non-trivial I would want to connect over that bus doesn't have enough physical space in the computer unless it fits inside that tiny connector design:
I'd rather either have a bank of ports that can be removed as one module and replaced with a single large component or just drop the concept and put more battery in there. These tiny individual modules waste space and don't accomplish much.
So you're saying you don't like the form factor of the expansion slots. Okay, I understand that. But the slots don't connect back to a hub, so it would help if you stopped using the word hub or talking about wanting to replace a supposed hub. It's four independent slots that wire back to the CPU, at least for the high speed wires.
You're talking in circles, the slots, their connectors, and the traces on the board are "a thing to remove" and take up considerable space. I see no point in engaging further with your pedantry trolling.
The traces are taking up roughly zero space. There's nothing to remove except the bit of plastic between the slots.
There is no specific thing that could be removed to satisfy your desire. The difference between removing something and the type of redesign they would have to do isn't pedantry trolling.
The modular ports are probably a good idea, but I also find the chosen module size a bit of an uncomfortable middle ground.
On the one hand, they're too small for some very obvious standard ports you might want on a laptop, like for example an RJ45 port. Or multiple/vertical USB-A ports. And they're too small by a relatively small amount, so if the laptop had aimed more at Thinkpad T size thickness (still not terribly thick) rather than Macbook Air thickness, the modules would have been much more flexible (and maybe we could have had even better keyboard options).
A really significantly sized bay would have been quite interesting too - old T-series used to allow you to add a second battery, a big bay could have allowed breakouts for sensors, fpga add ons, etc. Perhaps allowing a double-wide might have been a good way of addressing both.
On the other hand, the modules are still pretty chunky. You don't want to be using one out of its slot, and I don't know what sacrifices were made in terms of space in order to have what in many people's laptops are going to be pass throughs, but the Framework has a relatively disappointing battery, so perhaps that's an effect of the reserved space.
Could you have one of these slots filled with a USB-C battery? Then at least you keep modularity. But it's probably not as space efficient as just extending the internal battery.
> Is there an advantage to this over an external ethernet to USB-C adapter
It's more securely attached to your laptop: You don't need to deal with it when you put your laptop in a backpack, or worry about losing it when you disconnect it.
Apple's chips are faster than all ARM competition per watt. The only competitor coming close is the AMD Ryzen 6800U, which has better performance per watt (within 20% of Apple's M2) [0], while Snapdragon 8 Gen 1 [1] has about half.
The best you can get right now are some NXP parts from Solid-Run (bulky form factor for mobile, no GPU), the Nvidia Xavier/Orin series mobile SoCs, or some of the newer Snapdragon devices (I'm not familiar with these.) These (all) support UEFI, too. But they are all vastly more expensive in almost every way vs an Apple Silicon device running Linux, and offer worse performance and fewer CPU features (except the Snapdragons, it's true). And all of them including Apple Silicon still have quirks.
Part of it might be that -- and this is just an observation, speculation -- nobody seems to be licensing newer core designs in high volumes outside of server-class chips or explicit mobile SKUs. That means there's no volume to trickle down to consumer parts. Everything is either a mobile SoC design or a high-margin server SKU; there's no mid-range option for a modern ARMv8.5 core -- which happens to be exactly the kind of device Apple targets with something like the Macbook Air.
The rumor is that Nvidia is aiming to get the Xavier/Orin series devices to have full upstream Linux support (GPU acceleration is a different story and needs Mesa, but may come later I suspect, since both the mobile and desktop GPU drivers are now open.) If that happens I think these would probably be the best alternative options you could get, in potentially mobile form factors -- but they are still much pricier when considering performance.
Nvidia desktop GPU drivers are not open. They recently published some include files that make it a bit easier to communicate with the hardware and help the Nouveau project.
You can't drive a Nvidia GPU with anywhere near the same performance/functionality with an open source driver.
I might be totally wrong, as I haven't looked into the details (and I'm not familiar with nouveau either), but I thought they released more than include files? It seems like this[0] is the full kernel driver source (though only for recent nvidia GPUs). But userspace components are still closed source, so that might be what you are referring to. Though surely the kernel drivers are useful for more than just communicating with nouveau?
Ah, right, full kernel source, but it's really just a shim to a user space driver. It does help make Nouveau's job easier, but it's far from a usable driver.
It does nothing to help nouveau, since their biggest issue is at the firmware level. Nouveau cannot reclock modern cards due to HS mode and the cryptographic shenanigans that locks you into a binary blob signed by Nvidia.
In theory, in the future you'll be able to use the GPL kernel driver with Nouveau to have a fully open-source driver stack with not great, not terrible performance. It's not clear when that future will arrive.
That fits, seems like Nvidia is heading in the same direction AMD did years ago, but is a long way from the functionality and performance of the binary driver.
I'd wager it's because Linux is a relatively small subset of Framework users. And then ARM would be a very small subset of those users. There's nothing stopping a third party from creating one, but I highly doubt the financials would make sense.
The Framework forum is dominated by Linux users. It makes sense that the forum would skew towards enthusiasts who like to tinker, but I wonder what the actual numbers are. I've speculated that the Framework DIY edition has created more new desktop Linux users than anything else in recent history.
Pre-built Framework laptops only ship with Windows. Last year's model has Windows 10. This year's model has Windows 11. The DIY model ships with either Windows 11 or "None (bring your own)".
You could but:
- you need to handle booting - this is relatively easy as they use UEFI or coreboot respectively, both of which work fine. Some others using uboot are harder to support as you need to load the device tree definition manually.
- you need to have the drivers for the hardware - this is hard
Most ARM manufacturers for consumer hardware don't seem to think that working with upstream is a good idea. Intel and AMD actively work with linux developers to provide the drivers, and some other companies provide partial (with closed firmware) or complete documentation so developers can create drivers. On ARM side, some companies are fine while others are refusing to provide documentation: Qualcomm's modems and GPUs and Broadcom's wifi hardware are some examples.
I have heard the Microsoft Surface ARM notebooks got very good battery life when not doing any x86 emulation - might be worth looking into Linux on those.
The new Intel processors are efficient, of course if you start doing compute intensive things they will consume a lot of power, the same can be said for ARM CPUs.
You can always force the new Intel processors in low power mode (by activating only the low power cores, at least with Linux) if you want to save power.
Of course there are the other things a laptop is composed of that consume power.
I don't think it's the chip or process exactly. In my experience power management was very poor when using Linux on a laptop and required tinkering to get it in line with Windows. I suspect the battery life seen with Apple silicon Macs is the OS + hardware.
Maybe Framework is the right company to bring an ARM + Linux laptop with decent performance and battery life.
I've looked, and all I've seen are jailbroken Chromebooks and Win 10 laptops that also need to be jailbroken and are missing drivers for important things like wifi and accelerated graphics.
That is the state of non-server ARM in general on Linux.
It requires mucking with device trees and what not on most SBC's, etc.
Some of them kind of work.
The likely OEMs of the SoC for a laptop like this (Rockchip, Amlogic, etc.) are mostly non-helpful. Though Rockchip support is getting upstreamed reasonably well now (i.e. you could probably build a mainline kernel that works).
But honestly, I would expect a linux laptop built with any on-market ARM SOC to be a mess right now.
There is actually one SBC vendor who runs linux in a container under the android kernel because the android kernel has much better ARM hardware support :)
(which is probably right - most of the SOC's you'd put in a laptop are being built mainly for android based set top boxes, tablets, etc)
It would take a couple iterations and dedicated work to get somewhere good.
Basically a Raspberry Pi. Pretty unimpressive specs. My usual process with laptops is to buy something pretty beefy for its time and get a solid 7 - 10 years out of it. User-replaceable batteries helps with this.
I am 100% on board with the repairability, sustainability, and compatibility mission of Framework, but I absolutely hate the way they did these choices of only 4 expansion cards. A Thinkpad T14 Gen 3 [1] is 2mm thicker than the Framework, but has more than enough room for an RJ-45 ethernet port, 2 USB-C ports, 2 USB-A ports, 1 HDMI, a SD card slot, and a smartcard reader (maybe an edge case, but still). That's 8 ports, plus the headphone jack that is built in to both. If I bought a framework, I'd need to carry multiple expansion cards to have those same connectivity options, plus that RJ45 is going to stick out and probably wouldn't be good to put in a bag while attached to the laptop. I see so much hype about Framework, but does this seem wrong to anyone else?
Edit: Since this got a lot of replies, I do get the benefits of an expansion slot. Put one on each side, that should be plenty. But fill the rest of the laptop with the usual set of ports in a motherboard that I can easily repair.
There has always been a trade-off between modularity and size. Each card is effectively a USB-C/Thunderbolt dongle, so they need to be big enough to fit a reasonable amount of electronics (like 1TB of NAND, or an Ethernet IC) plus the shell and connectors. If your needs are basic (just USB, 3.5mm, 1x HDMI etc.) then you should get the Thinkpad. But if you want modularity for special cases (e.g. triple HDMI, double Ethernet, or triple MicroSD readers) or if you like to move your ports to alternating sides of your laptop, then the Framework's expansion system is for you. Framework is inherently a compromise between thickness and modularity (also why I assume it uses an internal battery) in a 14" portable laptop. Perhaps a new generation could shoehorn in a non-card USB-C port, although that would break their design rule that all ports support Thunderbolt.
> Perhaps a new generation could shoehorn in a non-card USB-C port, although that would break their design rule that all ports support Thunderbolt.
And that is what frustrates me so much. Sure, have an expansion slot on each side, they seem cool and I get the benefits. But give me more ports in a motherboard I can replace. This didn't seem like a technical decision or one grounded in values of repairability and sustainability, it seems like they made a design decision that is objectively worse.
> But give me more ports in a motherboard I can replace.
They might simply be bandwidth-constrained. Driving 4 individual Thunderbolt 4 channels requires an insane amount of IO bandwidth, more than even the M1 or M2 can provide.
If they added another port, it probably wouldn't be any faster than USB 2.0.
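Rough back-of-the-envelope numbers for that claim (raw link rates only, ignoring protocol overhead, and the per-lane figure is approximate):

    tb4_ports = 4
    tb4_gbps = 40                            # per-port Thunderbolt 4 link rate
    aggregate_gbps = tb4_ports * tb4_gbps    # 160 Gbps if all four ran flat out
    pcie4_lane_gbps = 16                     # ~16 Gbps per PCIe 4.0 lane
    print(aggregate_gbps / pcie4_lane_gbps)  # ~10 lanes' worth of PCIe 4.0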
Depends on your power adapter. My bog-standard one I use with my Framework is a Best Buy standard. It is just a little too thick to fit into the recessed USB-C when the adapter is removed.
I'm really surprised that there isn't (yet) a 2-4 port hub option. Or a larger expansion rail of some sort - it can have limits to it (no Thunderbolt or whatever's high bandwidth), that can act as a dock for multiple modules.
Maybe they're hoping a 3rd party does it (it's effectively a regular hub in a specific form factor).
> triple HDMI, double Ethernet, or triple MicroSD readers
Is that really the niche that Framework is targeting? I thought it was meant to be a generally appealing laptop with expansion options like laptops used to have.
How much physical space does 1TB of NAND really take up anyway?
Framework is just making a laptop that is both repairable and modular. These just happen to be use cases for it. I find business laptops like Thinkpads "repairable enough" and am okay with the upsides and downsides of a fixed selection of ports, so I don't own a Framework. But my friend owns one because they would rather have a limited number of very customizable ports - for example, when they need to charge the Framework while connecting to a TV, they can swap around the USB-C and HDMI ports to the side closer to each. And when they need to dock to a bunch of stuff at their desk, a USB-C dock is a more suitable solution. This obviously isn't for everyone, but it satisfies their niche.
Modularization inherently requires much more space than integrated components. So you either need to have a much larger chassis to support the same number of components, or you need to have fewer components for a similar sized chassis.
Would it have been possible to have modularisation, but with internal modules?
I know nothing about industrial design, and no doubt this must have been considered. What would be the trade offs the designers faced? Seems this modular design uses a lot of layers of chassis and space.
Connectors inherently need to reach the outside of the device, and they're all different shapes and sizes. At a minimum you need to reserve as much external space, per module, as you expect the largest module will need to expose.
For devices that don't have external connectors, modular devices are already common, like M.2.
Since they're all full-fledged TB4 ports they're likely limited in the number they can put in. Other laptops are just building in a separate USB root for multiple ports off one TB channel.
They could maybe have laid them out in a way that would have made making double-wide modules easier or something, though.
> fill the rest of the laptop with the usual set of ports in a motherboard that I can easily repair.
What exactly does "easily repair" mean? It all depends on who the end user is. Consider three very different users:
1. Expert: Replacing connectors on motherboards is not particularly hard for someone with experience soldering surface mount components.
2. Intermediate: Many laptops have modular barrel connector subcomponents that are easy to replace for anyone who is comfortable opening laptops.
3. Non-technical: Many consumers are uncomfortable opening laptops at all. Many have thrown away laptops with modular batteries.
Because there are lots of different users on different levels of technical ability, creating a "repairable" laptop can mean a lot of different things depending on where you draw the line.
> Many laptops have modular barrel connector subcomponents that are easy to replace for anyone who is comfortable opening laptops.
That's one thing that makes me a bit uncomfortable about the current trend to use only USB-C for charging. At least on my current laptop (and I believe many other models from the same manufacturer), the "traditional" barrel connector is separate from the motherboard, connected to it by a short length of wire, while the USB-C connector is soldered directly to the motherboard and reinforced by a piece of metal. Any mechanical stress to the charging cable (for instance, from tripping on it) will go to the chassis for the barrel connector, but for the USB-C connector it will go directly to the motherboard.
I ended up just buying additional expansion cards and swapping them on demand. I don’t often need HDMI/DP expansion cards so I just keep them around in case I need them, and have extra USB slots for normal days.
Sounds like just another version of dongle hell. It is even worse for ports used less often, in my opinion. Last thing I want is to realize I want to hook my laptop up to a TV and I can't find or forgot my HDMI dongle, or want to flash an SD card and can't find my SD card dongle...
And 5 years from now when USB-C 11.6rev5 (with a new 100pin physical connector) is commonplace for our 8k Displays, framework laptop owners can spend $40 to buy a new "dongle". The Lenovo owners will be buying a new laptop (or a Dongle / dock. something that physically hangs off the laptop).
Repeat this process as every external connector changes physical layout, non backwards compatible change, etc.
There is no perfectly forward and backward compatible laptop port layout. Lenovo does it one way (which some people like). Framework has gone a different direction (which a lot of other people seem to like).
It’s really not too bad, but I guess it depends on your use case.
Would be great if they could release a dual USB-A expansion slot though, the bandwidth should not be a problem. Or maybe redesign the expansion slots to be more slim so they could fit 6 in a laptop.
I would buy a Thinkpad over the Framework laptop in a heartbeat. Better keyboard (actual arrow keys!), track point, better hinges, matte screen, more ports, better warranty options. The only real negative is that Lenovo chose to solder the RAM in newer models.
Being able to reuse the chassis is good idea but the implementation is lacking.
> Being able to reuse the chassis is good idea but the implementation is lacking.
I find this statement interesting - could you expand on this? IIRC they just delivered on this promise which included (1) a new intel 12th gen mainboard as a drop-in replacement for their 11th gen mainboard and (2) an optional new chassis lid to resolve issues with the original chassis.
Disclaimer: I'm both a Thinkpad and Framework fanboy.
As I understand it, the big selling point for the 4 modular ports is that it allows future motherboards to fit the older chassis. The drawback is that you only have four ports to customize.
What they could have easily done instead is to use the same port layout for as long as possible. Let's say Ethernet, 2x USB-A, HDMI, 3.5mm jack, and as many USB-C ports as they can reasonably fit. That would be good enough for at least 3 years and probably way longer. In my opinion 3-5+ years backwards compatibility is good enough. Of course this is only me saying this from my armchair.
Hate? It's a small laptop, and I feel it's generally well-designed. Sure it could have more; for example I think the USB-C card should have two ports. I think they are trying to give them enough space for bulky cables, etc. But two small ports should fit in one card.
But it isn't "bad" in any sense of the word. More cards and they'd have to make the motherboard bigger. Maybe on a future 15"?
No? But that might be because I don't need the same options as you do. I however do like the dual HDMI, USB-C, and the extra storage I added to my Framework, and since your Thinkpad only has 1 HDMI output I think that's rather odd. It's 2mm thicker - how do you handle the lack of a second HDMI port?
I cannot imagine a realistic scenario in which I would need two HDMI ports on the laptop. When I need to connect more than one screen (home, work) I would always use a docking station.
Ironically one of the top complaints from mainstream laptop reviewers was that the Framework was too thick compared to a Macbook Air or Razer Blade. I too would have preferred a slightly thicker device with more ports, but you can't please everyone.
And what will happen when you want more than the 16GB of soldered memory? Or one of the memory modules fails? The Framework offers the best solution for this situation, and the Thinkpad (as much as I love their older models) doesn't.
Well the T14 does have one SODIMM slot that takes up to a 32GB module, for a total of 48GB. But I don't want 16GB of soldered memory. I'm not some Lenovo shill trying to sell people on the new thinkpads, I hate what Lenovo has done to the classic line. I would be so incredibly happy for framework to take Lenovo's place. But I need more than 4 ports and have such a hate for carrying adapters, because I've been burned in critical situations after losing an adapter.
I mean, to be fair, it's just their first product / platform? In a few years, imagine having way more form factors and options here to support what you're looking for.
In a near future they could easily go for a very simple solution which is to provide expansion cards featuring multiple connectors (or storage etc) at a time.
I believe they looked at them. The issue is essentially that the expansion board that handles the 2.5Gbit/s processing and the port combined were larger than the space they had designated.
Apparently I'm in the minority of commenters here, but man is that U-G-L-Y. I really, really would not want to have that non-matching hunk of plastic sticking out of my laptop all the time.
I also have to wonder at the longevity of that design - it’s gonna hang up every time you shove it in a backpack, I’d be concerned about the port hole cracking at some stress points.
A bit sad this is arriving just as I'm getting ready to return my 12th gen Framework.
Really wanted to keep it but it arrived with a litany of quality control issues. Most of those are probably fixable with replacement parts but I don't think there's anything that can be done about the screen wobble.
I'm similarly throwing in the towel. We tested a few units, me and one of my team, both 11th gen. Mine freezes constantly, even after swapping out the mainboard and going through a loooong back-and-forth of testing with support. My colleague's has thermal management issues from time to time, and occasional freezes, but not nearly as bad as the unit I'm using. Both of us tested several operating systems, Linux as well as Win 10 Pro.
Unfortunately, we didn't get an extended warranty, so it's a loss. Hoping Framework can address their quality issues because the notion of extensibility and repairability is fantastic. We'll look forward to checking in on them in a few years after they work through the QA issues. Gotta have a laptop that works.
That's fine (and why I wanted to support Framework in the first place) but do you really expect to buy a new laptop and then have to purchase more parts just to make it work correctly?
> If you have a laptop where the lid angle drops on its own while the laptop is stationary, write into support with a video of it, and we’ll send you a new Hinge Kit.
I actually think this is the resonance they talk about in the last section rather than the hinges themselves. The angle at the bottom of the screen doesn't change, it's the top that flexes back and forth. Coupled with the reflective screen it makes focusing on the monitor really hard when it's on a mount to be at eye height (my Thinkpad and a Dell have no problems on the same mount).
For an experimental DIY product like Framework? I do, actually, and it's a large part of the reason why I haven't bought one despite thinking the overall project is a good idea.
That's the whole point of the laptop! You can fix the issue yourself without having to file an RMA, ship it back and buy a whole new laptop. If that doesn't appeal to you and you'd rather buy a whole new laptop because the hinge has a slight wobble then you were never the target audience for this product.
This is entirely the wrong attitude to have here - _everyone_ is the target audience for the laptop. Fixing it yourself is the option that you have _down the road_ once your laptop has had use, not as a replacement for quality control!
I hope I don't regret getting a Framework for my mother. Just the battery life alone has me thinking that a Ryzen Thinkpad would have been a better choice.
Is there anything Framework-specific about this other than the form factor? That is, could I just plug this into any old Thunderbolt port on another laptop and have it fire up?
I've already used my 1TB storage drive card like this many times. Just happens to be my smallest external storage device, and it's a UAS device, so no brainer.
Well now I want a Framework laptop - granted, what I'd use this for is uploading media to youtube etc, and for now free options just aren't as good as Final Cut Pro or Premiere so I'll be sticking to my usb-c to 2.5G adapter for my macbook pro. Someday though!
I like the stuff that Framework is doing, but I don't like the way that jack sticks out. I would be happier plugging a dongle into a USB-C port, when I needed it, than having that 'nub' hanging out all the time.
At least in my experience, SFP-form-factor ports have been ridiculous in terms of the heat that they generate. They're flexible, yes, but power and thermal demands make them impractical for anything that isn't wired to A/C and blowing lots of air.
I really wish SFP+ and SFP28 would be more common in high end desktop motherboards. They’re more efficient and just better, and they can be adapted to plain old ethernet if needed (at a power penalty imposed by switching away from fiber).
Interesting... my experience has been the opposite, but I will admit I am not a network engineer. I've burned my fingers on SFP modules a few times, however.
Do you think that power requirements are different because SFP often uses direct-attach copper?
I'm honestly curious to reconcile my experience with other, more experienced persons' reality here.
I believe both DAC and fiber draw about the same power, but DAC obviously has very limited cable lengths. I’m not entirely sure why regular ethernet draws more power, I just know this is a fairly consistent thing wherever I look.
Handling the SFP module itself will more directly expose you to the heat than an integrated ethernet port, so maybe that’s the difference.
Our old RJ45 transceivers always caused the cards to run at the maximum wattage required for the longest supported run of cable. I don't recall if it was a quirk of the spec or a lack of sensing.