I take things from point A to point B

  • 53 Posts
  • 4 Comments
Joined 1 year ago
Cake day: June 17th, 2023

  • There are two main aspects to coreboot in my opinion that differentiate it from other firmware ecosystems:

    The first is a strong push toward having a single code base for lots of boards (and, these days, architectures). Historically, most firmware is built in a model I like to call “copy & adapt”: the producer of a device picks the closest reference code (probably a board support package), adapts it to work with their device, builds the binary and puts it on the device, then moves on to the next device.

    Maintenance is hard in such a setup: if you find a bug in common code, you have to backport the fix to all these copies of the source code, hope it doesn’t break anything else, and rebuild all these different trees. Building a 5-year-old coreboot tree on a modern OS is quite the exercise, but many firmware projects are nearly impossible to build under such circumstances.

    With coreboot, we encourage developers to push their changes to the common tree. We maintain it there, but we also expect the device owner (either the original developer or some interested user) to help with that: at least with testing, but ideally with code contributions that keep the board up to the current standards of the surrounding code. A somewhat maintained board can typically be brought up to the latest standards in less than a day when a new build is required, which means everybody has an easy time doing a new build when necessary.

    The second aspect is our separation of responsibilities: where BIOS mandates the OS-facing APIs and not much else (with lots of deviation in how that standard is implemented), UEFI (and other projects like u-boot) tends toward the other extreme. With UEFI you buy into everything from the build system to the boot drivers, OS APIs and user interface. If you need something that provides only 10% of UEFI, you’ll have a hard time.

    With coreboot we split responsibilities between two parts: coreboot does the hardware initialization (and comes with its own build system and drivers, but barely any OS APIs and no user interface), while the payload is responsible for providing interfaces to the OS and the user (we can use Tianocore to provide a UEFI experience on top of coreboot’s initialization, or SeaBIOS, GRUB2, u-boot, Linux, or any program you build for the purpose of running as a payload).

    The interface between coreboot and the payload is pretty minimal: the payload’s entry point is well-defined, and there’s a data table in memory that describes certain system properties. In particular, the interface defines no code to call into (and thus no drivers), because we found that such callbacks complicate things and paint the firmware architecture into a corner.

    To help payload developers, coreboot also provides libpayload, a set of minimal libraries implementing libc, ncurses and various other things we found useful, plus standard drivers. It’s up to each coreboot user/vendor whether they use that or go with whatever else they want.

    credit: [deleted] user on Reddit.
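    To make the “data table in memory” concrete, here is a minimal Python sketch of what a payload does first: scan for the table header and read its fields. The layout (an “LBIO” signature followed by five little-endian 32-bit values) follows my reading of coreboot’s coreboot_tables.h; treat the exact field names here as assumptions rather than a spec.

    ```python
    import struct

    def parse_lb_header(buf, off=0):
        """Parse a coreboot table header: 'LBIO' plus five u32 LE fields."""
        if buf[off:off + 4] != b"LBIO":
            return None  # no table at this offset
        fields = struct.unpack_from("<5I", buf, off + 4)
        names = ("header_bytes", "header_checksum",
                 "table_bytes", "table_checksum", "table_entries")
        return dict(zip(names, fields))

    # A real payload scans a few well-known memory locations for the
    # signature; here we fake a buffer with a header at offset 16.
    fake = bytes(16) + b"LBIO" + struct.pack("<5I", 24, 0, 120, 0, 7)
    hdr = parse_lb_header(fake, 16)
    print(hdr["table_entries"])  # 7
    ```

    Each of the `table_entries` records that follow the header is tag/size framed, which is what lets the interface stay data-only: the payload walks records it understands and skips the rest, with no firmware code to call back into.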

  • The “store now, decrypt later” issue applies to public-key cryptography, which protects most internet traffic. Symmetric encryption isn’t really threatened by quantum computing even in theory: your 256-bit key might effectively become a 128-bit key, but that is still far too strong to worry about (Grover’s algorithm, the general-purpose quantum search algorithm, is what roughly halves the effective key size).

    What is likely threatened by quantum computing are public-key algorithms that work on the idea of one direction being easy and the other being hard. Like factoring: multiplying huge numbers is fast, factoring them is not. Shor’s algorithm is the famous one that could do this fast enough given a good quantum computer. A lot of these allegedly one-way functions would be broken to varying degrees in the so-called ‘post-quantum world’.

    In a normal SSL connection, you use public-key cryptography to exchange a symmetric key, then you use that. So if you were to record an entire SSL connection and later be given a big quantum computer, you could in theory work it all out: first undo the initial public-key exchange, then read off the symmetric key, at which point you can decrypt the remainder normally.

    From my understanding, Standard Notes wouldn’t actually be subject to this, as it never transmits your actual key: your data is encrypted locally with a key derived from your password, and only that ciphertext is sent over TLS. So while the TLS public key and session key could be discovered, the actual payload data would still be encrypted with a key that was never transmitted.

    Now, if it does actually transmit that key at some point, then all bets are off. But it couldn’t really be secure if it transmitted your key anyway right? So it probably doesn’t do that.
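    The record-everything-then-break-the-key-exchange scenario above can be sketched end to end. This is a toy, with deliberately tiny Diffie-Hellman numbers and repeating-key XOR standing in for a real symmetric cipher; the brute-force loop plays the role Shor’s algorithm would play against real-sized parameters:

    ```python
    import hashlib

    # Toy Diffie-Hellman; the attacker records everything sent on the wire.
    p, g = 2_147_483_647, 5            # deliberately tiny modulus
    alice_secret, bob_secret = 123_456, 654_321
    A = pow(g, alice_secret, p)        # recorded
    B = pow(g, bob_secret, p)          # recorded

    key = hashlib.sha256(str(pow(B, alice_secret, p)).encode()).digest()

    def xor_encrypt(data, key):
        # Stand-in for the symmetric cipher: repeating-key XOR.
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    ciphertext = xor_encrypt(b"attack at dawn", key)   # recorded too

    # "Later, with a quantum computer": solve the discrete log of A.
    # Brute force works only because p is tiny; Shor's algorithm would
    # make this step feasible for real parameters.
    x, acc = 0, 1
    while acc != A:
        acc = acc * g % p
        x += 1
    recovered_key = hashlib.sha256(str(pow(B, x, p)).encode()).digest()
    print(xor_encrypt(ciphertext, recovered_key))  # b'attack at dawn'
    ```

    Note that an inner layer encrypted with a password-derived key that never crosses the wire (the Standard Notes situation described above) would still come out of this attack as opaque ciphertext.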