
Add position on WebUSB. #193

Merged: 3 commits merged into mozilla:master on Nov 11, 2019
Conversation

@dbaron (Contributor) commented Jul 23, 2019

Here's an attempt to add a position for WebUSB. I'm particularly interested in feedback on the wording here: is this the justification we should use? Should we raise other issues? Etc.

@tomrittervg

There are concerns about same-origin policy (SOP) violations as well, if the USB device permits reading data.

@dbaron (Contributor, Author) commented Jul 23, 2019

If the USB device can read data from what? System memory?

@tomrittervg

I was thinking more like:

  • Can a website interrogate the device to return a hardware identifier? (persistent user tracking across domains)
  • Can one website store data on the USB device (either explicitly or through abusing some other feature) and another website read the data?

This had previously come up in FIDO discussions, where existing PIV identity cards were proposed, and disliked, because they had no concept of 'origins' or individual identities per origin.
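The first bullet above can be made concrete with a short sketch. Assuming a page has already passed the permission prompt, WebUSB's `USBDevice` exposes `vendorId`, `productId` and `serialNumber`; the `stableId` helper below is hypothetical, not part of any spec:

```javascript
// Sketch of the tracking concern in the first bullet. Once a page is past the
// permission prompt, USBDevice exposes vendorId, productId and serialNumber;
// the stableId helper is hypothetical, not part of any spec.
function stableId(device) {
  // serialNumber is the same for every origin the user grants access to, so
  // it behaves like a supercookie that survives clearing cookies.
  return `${device.vendorId.toString(16)}:${device.productId.toString(16)}:${device.serialNumber}`;
}

async function fingerprint() {
  // The only gate is the chooser prompt; afterwards the identifier is stable.
  const device = await navigator.usb.requestDevice({ filters: [] });
  return stableId(device);
}
```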

@martinthomson (Member)

@tomrittervg raises an interesting point. If there are properties of the device that are stable across origins and sufficiently unique, or an ability to effect a stable change to a device, then tracking is enabled. But we don't need more than one reason to label something as "harmful", and a full dissertation on the ways in which something is "harmful" is not necessary.

@bzbarsky (Contributor)

> But we don't need more than one reason to label something as "harmful"

It may be worth listing all our reasons if we think people will try to change the proposal to avoid the harms we have identified…

@martinthomson (Member)

Please comment on the issue with this argument, then. I don't want to lose it, and only the issue will remain linked once this PR is merged.

@kgiori commented Aug 26, 2019

If you want cases FOR the ability to take advantage of WebUSB, the MicroBlocks tool for programming microcontrollers via a web browser would be at the top of my list. The current MicroBlocks IDE is installed natively per OS. It works by having the IDE program "blocks" definitions that run on a "virtual machine" on the MCU board. Communication works via serial over USB. I think "Web Serial" commands would be sufficient. Is there a safe mechanism that can be implemented so that the main MicroBlocks IDE can run in a browser and communicate via the serial protocol over USB to the MCU board?
For example:

  • limit the type of communications that can occur over webUSB (suitable to the MicroBlocks VM but not completely open-ended)
  • allow a MicroBlocks-specific web extension for Firefox that would enable secure communication to the virtual machine running on the MCU
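For reference, the "Web Serial" route mentioned above looks roughly like this in browsers that ship the API (Chromium does; Firefox does not). The `frame` helper and the bytes sent are invented for illustration and are not the real MicroBlocks protocol:

```javascript
// Rough sketch of the Web Serial route: the page gets a byte stream to one
// user-chosen serial port, not raw USB access. The frame helper and message
// bytes below are hypothetical, not the real MicroBlocks protocol.
function frame(opcode, body) {
  // hypothetical framing: opcode byte, length byte, then the body
  return new Uint8Array([opcode, body.length, ...body]);
}

async function connectMicroBlocks() {
  const port = await navigator.serial.requestPort(); // user picks one port
  await port.open({ baudRate: 115200 });             // baud rate illustrative
  const writer = port.writable.getWriter();
  await writer.write(frame(0x01, new Uint8Array([0x00])));
  writer.releaseLock();
  return port;
}
```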

@dbaron mentioned this pull request Nov 10, 2019
@dbaron (Contributor, Author) commented Nov 10, 2019

I've revised this to add a sentence mentioning tracking issues.

@martinthomson (Member) left a comment

I think that we should merge this, but be prepared to update it as we learn more.

@ekr (Contributor) commented Nov 11, 2019

> WebUSB says clearly that it does not expose sensitive hardware but was designed to work with platforms such as Arduino. Mozilla constantly opposes numerous standards, doing tremendous damage to the web community because of its inability to do things right.

This issue isn't solely about "sensitive" hardware, but in any case, the specification does not provide technical mechanisms to protect the user beyond a permissions prompt. An Arduino is actually a good example of a dangerous device, because it is subject to remote firmware installs.

@dbaron merged commit 1a20084 into mozilla:master on Nov 11, 2019
@ekr (Contributor) commented Nov 11, 2019

> If you say that WebUSB is subject to remote firmware installs, then why is it allowed on platforms like Android?

The Chrome team has made a different cost/benefit analysis than we did.

> Mozilla and Safari are 10 years in the past and, worse, stop progress; web developers have to deal with that burden. Chrome has shipped WebUSB by default since 2016 and no disaster has occurred.

Actually, I am aware of at least one vulnerability that is a result of this API: https://www.yubico.com/2018/06/webusb-and-responsible-disclosure/

@timonsku

Large vendor tools like the ARM Keil Web IDE rely on WebUSB. The discussion here seems to revolve mostly around vague concerns that have not been shown to be problematic. Updating device firmware is exactly one of the big use cases of WebUSB.

@rektide mentioned this pull request Jun 9, 2022
@virtual812

> Large vendor tools like ARM Keil Web IDE rely on WebUSB. The discussion here seems to mostly be around vague concerns that have not been proven to be problematic. Updating firmware of devices is exactly one big usecase of WebUSB.

The whole point, I believe, is to think ahead and consider the problems before they become problematic, not to rubber-stamp every new standard that crosses the desk and react later when it's found to be a bad idea (often too late).

We don't need ActiveX, Flash, or any of the other security nightmares that were OK'd because they were used by or came from big business, all over again.

Rather than saying this must be OK'd because ARM has a Web IDE, maybe ARM should consider going about things in a different way.

I'm only lightly knowledgeable on this subject, since Firefox's lack of WebHID support stopped me from doing something and alerted me to the issue.
But instead of getting upset, I've read the material and am now happy Mozilla is taking security seriously regarding these poorly-thought-out standards.

@timonsku
Copy link

> The whole point I believe is to think ahead and consider the problems before they become problematic. Not just rubber stamp approve every new standard that passes the desk and react later when it's found to be a bad idea (often too late).
>
> We don't need ActiveX, Flash, ...name a bunch of other security nightmares that were OK'd because they are used by or come from big business all over again.
>
> Rather than saying this must be OK'd because ARM have a Web IDE, maybe ARM should be considering going about things in a different way.
>
> I'm only lightly knowledgeable on this subject since finding Firefox not having WebHID stopped me from doing something and alerted me to the issue. But instead of getting upset I've read the material and am now happy Mozilla are taking security regarding these poorly thought out standards.

To raise concerns, you have to present scenarios with an actual technical underpinning. The scenarios being outlined here are so unspecific and so broad that they are impossible to discuss in any meaningful way, because they leave so much room for interpretation. Instead of talking about what exactly the actual technical risks are, the bottom line always comes down to "USB dangerous", and then that's the end of the discussion.

This discussion has been going on for over 7 years now; this is not a new standard, it is a very well-established one by now, and all I can do is tell my users not to use Firefox, because any answer to the raised concerns seems to be either ignored or waved off. It does not seem possible to have a productive discussion with Mozilla on this topic at the moment.
My only alternative for someone who doesn't want to use Chrome is to have them install privileged native software, which is not only far more dangerous but also a much larger hurdle for users these days, who have grown up largely with web-based applications. This is a lose-lose situation at the moment.

Every single issue brought up in this discussion so far has very reasonable ways of mitigating its risks, but I have yet to see the people fundamentally opposed to WebUSB engage with those counterpoints in good faith. This is frustrating, and it also pushes Chromium even further into a role that dominates the market.

No one is saying to just hand-wavingly implement WebUSB with no regard for security. What hardware developers like myself are asking is to engage with the topic seriously and with an open mind, instead of shutting down the discussion every time it is brought up, because minds seem to be made up before the discussion is even had.

@fabricedesre

> No one is saying to just hand waivingly implement WebUSB with no regard to security. What hardware developers like myself are asking is to engage the topic seriously and with an open mind instead of shutting down the discussion every time it is brought up because minds seem to be made up before the discussion is even had.

How do you address the issue of the server side being compromised and serving code that can damage users' devices?

@timonsku

> > No one is saying to just hand waivingly implement WebUSB with no regard to security. What hardware developers like myself are asking is to engage the topic seriously and with an open mind instead of shutting down the discussion every time it is brought up because minds seem to be made up before the discussion is even had.
>
> How do you address the issue with the server side being compromised, and serving code that can damage users devices?

I would first look at what the current situation and risk profile for a USB device is, and then ask whether any of those factors would become worse with this new form of access.
If we ignore Chromium for a second, then the only way for a user to interact with a USB device that does not use a standard USB device profile (one made accessible via standard OS drivers) is by downloading a piece of software that requires administrator permissions to talk to it.

In this realm, absolutely any tool the user installs can pose a risk to a USB device if it has that intention. The hurdle for the user is clicking a download and opening the installer; by default, Firefox presents a warning dialog saying that downloaded software can be dangerous. They have to acknowledge this and continue. They have no way of knowing whether the binary they downloaded has been compromised; they just have to reasonably assume it has not.
The only thing available to the user is to see that they are downloading from a website that has been verified via a certificate, but this says nothing about the validity of the data served.
So far this is a similar risk profile for the USB device, and a much larger risk profile for the user's system overall, as it is much more likely that a compromised website would want to gain control over a whole computer full of useful data than over a USB device.

The main difference is that a user would download the program for interacting with their device a lot more often if they are dealing with a web-based tool. So the exposure potential is certainly a bit higher, as native tools tend to update a little less frequently. But still, a native tool is also vulnerable to a compromised server any time it runs an update; that is an inherent risk with any piece of software.

Now for the actual risk described here, namely damaging a device with code.
This part of the discussion has bothered me so far, as it does not examine the technical risk and just sort of assumes that this is both easily feasible and somehow a new risk for USB devices.

USB by design has no way of knowing whether it is interacting with a malicious piece of code. It is the hardware designer's responsibility to verify data coming in via USB. This is the case no matter what environment the host-side code executes in.
If you have a device that allows firmware updates via USB, then you sign the firmware and only allow signed firmware to be uploaded. That is a very common practice, and the only feasible one, for protecting a device from code that you don't want it to run.
Again, this is inherent to USB; the minute a device is plugged into a computer, it faces this risk.
For other types of USB devices that merely communicate data, the attack surface is lower, as you would have to find specific exploits in the USB device class that the device implements.
Though again, the risk of being tricked into retrieving malicious host code is present for both native and browser-based code.
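The signed-firmware practice described above can be sketched from the host side. Everything here is hypothetical: the endpoint number, chunk size, and the convention of sending a detached signature before the image are invented for illustration, not any real device's protocol:

```javascript
// Hypothetical host-side sketch of a signed-firmware upload over WebUSB.
// Endpoint number, chunk size, and send-signature-first convention are all
// illustrative assumptions, not a real device protocol.
const CHUNK_SIZE = 64;

function chunk(image, size = CHUNK_SIZE) {
  // Split the firmware image into fixed-size transfer units.
  const parts = [];
  for (let i = 0; i < image.length; i += size) {
    parts.push(image.slice(i, i + size));
  }
  return parts;
}

async function uploadSignedFirmware(device, image, signature) {
  await device.open();
  await device.claimInterface(0);
  await device.transferOut(1, signature); // device stores the detached signature
  for (const part of chunk(image)) {
    await device.transferOut(1, part);    // device hashes the image as it writes
  }
  // The device's bootloader verifies the hash against the signature with its
  // baked-in public key before marking the new image bootable.
}
```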

Accessing a USB device through a web browser brings some security advantages. The website has to be granted access to a specific USB device by the user, versus being able to access absolutely any device present on the system. This reduces the risk of malicious software scanning for devices it deems a target that might not relate to the device the website is normally built for.
The browser can also ensure the user is made aware whenever access to a USB device is requested, which is not the case with native software that has installed itself as a service. The user then also has to approve that request.
That is, in my eyes, a huge improvement over what we have right now.
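As a sketch of that consent model: a site declares filters, the chooser lists only matching devices, and the user picks exactly one. `matchesFilter` below is a simplified stand-in for the spec's filter-matching steps, and the vendor ID is only an example (0x2341 is Arduino's):

```javascript
// Sketch of the per-device consent model described above. matchesFilter is a
// simplified stand-in for the spec's filter matching; the vendor ID is only
// an example.
function matchesFilter(device, filter) {
  return (
    (filter.vendorId === undefined || device.vendorId === filter.vendorId) &&
    (filter.productId === undefined || device.productId === filter.productId)
  );
}

async function connect() {
  // The chooser lists only devices matching the declared filters, and the
  // user picks exactly one; nothing else on the bus is visible to the page.
  const device = await navigator.usb.requestDevice({
    filters: [{ vendorId: 0x2341 }],
  });
  await device.open();
  return device;
}
```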

This does not eliminate risk, but it mitigates it to a reasonable degree in my opinion.
You face potentially enormous privacy and security risks with audio and video device access in a browser: a compromised website could spy on a user and extract information. Yet we allow such functionality because the benefits outweigh the remaining risks.
Specialized USB devices that require customized information exchange are a much more niche application, unlike a webcam or audio device, which is present on almost every computer nowadays and presents an enormous attack surface.
With all of these there is a remaining risk, as with all software, but we can deem it acceptable given enough mitigations.

Very clear and very explicit modals that inform the user and force a conscious choice are such mitigations, and I'm certainly in favour of making these modals much more prominent and disruptive than the ones currently implemented for audio and video.
The benefits massively outweigh the downsides of a disruptive UX when compared to instructing a user to download and install a piece of native software that can update itself via the internet, which is both a higher risk to the user and a higher maintenance burden on the developer.

At the end of the day, this and quite a few more of the security risks raised are inherent to USB itself. That does not mean they should be discarded, but we can't change how USB works; that is a separate discussion, and it is not a risk specific to WebUSB versus native USB. The question is: is it better for the user to interact with their devices via a downloaded piece of software running fully privileged, or to sandbox the code so it can interact with nothing but USB, and only with devices approved by the user any time a new session needs access?

@fabricedesre

> I would first look at what the current situation and risk profile for a USB device is and then talk about if any of those factors would become worse via this new form of access.

You also need to look at the guarantees currently offered by the web as a platform, and figure out whether this kind of proposal is going to make the web basically as unsafe as other platforms. Historically, the web gives you peace of mind that navigating to a website won't cause issues.

@whitequark commented Dec 30, 2023

> How do you address the issue with the server side being compromised, and serving code that can damage users devices?

WebMIDI had the same problem, and Mozilla shipped it not so long ago. Clearly there is a specific kind of consent flow past which hardware damage is considered acceptable.

Is the issue of device damage something that needs to be addressed for WebUSB at all, given that it is self-evidently okay for WebMIDI? (This isn't strictly on topic, but WebSerial has been rejected with the same justification, while it is, even more so than WebUSB (see below), essentially equivalent to WebMIDI with SysEx; and just like with WebMIDI, you can already install an extension that adds the WebSerial API to Firefox.)


One thing that WebUSB trivially allows, and that neither WebMIDI nor WebSerial does, is using USB devices to attack the host.

For example, any device based on the Cypress FX2LP ASIC (of which there are likely billions, if not more) has a hardware implementation, impossible to disable and impossible to detect during enumeration, of a vendor-specific control request 0xA0 that performs, by design, arbitrary code execution. If a user grants a webpage consent to use such a device, the webpage can trivially elevate its privileges to those of the terminal session by reprogramming the device so that it disconnects and re-enumerates as a keyboard, then types Win+R cmd /c "..." or the equivalent on another OS (which has been widely demonstrated). And this is just one ASIC of many thousands, many of which surely contain similar debug functionality, documented or not.

The consent flow for WebUSB has to take this into account, and I also think it should make this point more firmly than the consent flows for WebSerial or WebMIDI. If you're using WebSerial, then potential device damage is almost certainly an understood and accepted consequence of normal, non-malicious use; if you are reflashing a bootloader or driving a 3D printer, it is fully expected that the device may break. WebMIDI has a consent flow warning you that it could potentially install malware, which is quite unlikely, but device damage is definitely possible and in some cases (an intentional firmware upgrade) must be expected. WebUSB differs from both in that it can actually, often very easily, and using only documented interfaces, install malware on the host.
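One conceivable mitigation for this class of device, sketched below: refuse known code-execution-capable silicon before the chooser is ever shown (Chromium maintains a WebUSB blocklist in a similar spirit). The list is illustrative and far from complete; 0x04B4 is Cypress's vendor ID and 0x8613 the FX2LP's default product ID:

```javascript
// Sketch of a host-side blocklist check: silicon with known code-execution
// vectors could be refused before the device chooser is shown. The list is
// illustrative and incomplete, not a real blocklist.
const BLOCKLIST = [
  // FX2LP default IDs: RAM download via vendor request 0xA0
  { vendorId: 0x04b4, productId: 0x8613 },
];

function isBlocked(device) {
  return BLOCKLIST.some(
    (entry) =>
      device.vendorId === entry.vendorId &&
      (entry.productId === undefined || device.productId === entry.productId),
  );
}
```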

@whitequark

It turns out that WebMIDI is being used as an experiment to determine whether other negative positions should be reversed: #720. That's a very reasonable position!

@thestinger

> How do you address the issue with the server side being compromised, and serving code that can damage users devices?

Providing a much more private and secure way to do what is currently done by downloading and running a native application is a privacy and security improvement, not a downgrade. It's a game changer on desktop operating systems with no mandatory sandbox, such as Windows, macOS and desktop Linux.

Mozilla's current stance is effectively that downloading and running native code is safer than granting a website access to a specifically chosen USB device. It's no harder to download and run a native application with Firefox than it is to select a specific USB device and grant access to it in a Chromium-based browser; Chromium's approach is in fact far more private and secure. If you want to make it harder to abuse, you can require users to go into a browser menu to enable WebUSB before access can be requested, where you can explain what WebUSB is and that USB devices could have vulnerabilities. Firefox already grants extensive access to the GPU by default via WebGPU, which is far more of a practical security problem than tricking users into giving access to an obscure USB device with a vulnerability. If these kinds of USB devices are identified, they can be blacklisted; meanwhile, Firefox doesn't blacklist GPUs whose drivers/firmware have known unpatched vulnerabilities, including end-of-life drivers/firmware that no longer receive updates, and no permission is needed to use them.

Chromium already makes it clear that access is being granted to a specific USB device, while downloading and running a native application does not make it clear that it's going to get access to all your browser history, browser logins, browser data, and all your other data from other applications. That is in fact not clear to the average user, especially when the mobile operating systems they mainly use don't work that way, and desktop operating systems give the appearance that they don't work that way, since some applications do ask for permissions. Some USB devices have exploitable vulnerabilities, but there's no need to exploit vulnerabilities under Firefox's current approach: having the user download and run a native application gives access to everything on desktops, and on mobile it gives the ability to get access to a USB device via the same dialog the browser would show. This is also a rare case where developers are highly motivated to use the more private and secure approach, WebUSB, since they can avoid making per-platform applications as long as they give up harvesting a bunch of data, hooking into a bunch of applications, etc., as you can see happening with software for updating a Logitech mouse and the like.

USB devices should be secured against being plugged into malicious hardware. If they aren't, that's already a problem, and the native app you're forcing people to use instead of a web app can take full advantage of that too.

GrapheneOS uses WebUSB for our web installer partly because it avoids users needing to download and run software like Android platform-tools without verifying a signature for it, on platforms that don't have these tools available in their repositories. Debian and Ubuntu have horribly out-of-date and broken packages. We avoid these problems with WebUSB, and we also get portability to far more operating systems, including Android and ChromeOS, without having to make an Android app. You'd be able to install GrapheneOS from an iPhone too if Safari ever added WebUSB support.

With either the WebUSB or traditional install method for GrapheneOS, all of the firmware and the OS are fully cryptographically verified with a key fingerprint displayed at boot for the OS. Users are in fact not trusting that our website isn't compromised if they verify the key fingerprint against one obtained out-of-band. Android devices supporting installing another OS implement a security model for fastboot where you have to approve unlocking and locking, firmware images are verified with downgrade protection and OS images are also verified with downgrade protection after the device is locked again. Tricking someone into booting up fastboot mode, granting access to a computer and unlocking is not a realistic attack vector. If our website was compromised, then the traditional non-WebUSB install method is only safe for people who use signify to verify the zip before running a script from it, but macOS and Windows users don't have a safe way to obtain signify which isn't simply downloading it from another site without verifying that. With either install method, verified boot for firmware and the OS with the key fingerprint shown at boot and hardware attestation via Auditor provides a verification method avoiding having to trust that the computer you used to do the installation wasn't compromised.

When Firefox users run into the lack of WebUSB support, we almost always convince them to use Edge (included in Windows), Chrome (included on Android) or Brave (what we tend to get Linux and macOS users to use). It's often the start of them leaving Firefox behind. I'm not sure what Firefox accomplishes by not implementing it, beyond frustrating people and bleeding a little extra market share.

In GrapheneOS, our focus is giving users the ability to use the apps and services they choose more privately and securely, rather than disallowing them from doing so. We do encourage using more private apps and services, but the OS doesn't force that on people. It's why we have our sandboxed Google Play compatibility layer (and just added Android Auto support as an extension to it), Storage Scopes to replace storage/media permissions, Contact Scopes to replace the Contacts permission, etc.: telling users not to do things doesn't work well. Making it possible for them to do things privately and securely, with low friction, is the way to go. People want to get stuff done and will do it in whatever way is available to them. Not having WebUSB just pushes people to native applications, or to Chrome/Brave/Edge/etc.

I don't see the downside, especially if it starts out behind a toggle in the browser settings, so there isn't even an opportunity to show a dialog unless the user goes out of their way to enable it. Meanwhile, the option to get people to download and run native code is available by default, with no toggle blocking it.

@timonsku

> It turns out that WebMIDI is being used as an experiment to determine whether other negative positions should be reversed: #720. That's a very reasonable position!

I was also unaware of that position. It is great to see, and I think it is a very sensible way of approaching this. I would love to see WebSerial receive the same treatment in the not-so-distant future; that would definitely make things easier for the many device makers that rely on a more generic protocol for their hardware.

I also agree with @whitequark's previous remarks. WebUSB is certainly more difficult to handle correctly, and I would like to see a much more stringent approach to the UI flow.

Another option that @whitequark and I discussed earlier is to implement WebUSB not as a free-for-all API but with a flow similar to how WebMIDI is currently implemented, where the driver is installed permanently and locally, so that a rogue website can't just deploy a malicious driver. That would be a safer implementation than what Chrome is currently doing.
So WebUSB would be an add-on API (or maybe a new entity more specific to device drivers), and websites could only interact directly with the driver's interface, as they do with WebMIDI.
That could mitigate attacks on vulnerable devices that could be a threat to the host, at least to the degree that drivers could generally be trusted to a similar level as a driver you download from the internet, and it relies less on the user going through an abrasive consent flow.
As long as the available APIs stay compatible with the current proposal, it also wouldn't necessarily be a huge maintenance burden on driver developers, should Chrome stick with its current way of doing things.
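The "driver as a locally installed add-on" idea could look something like this, with entirely hypothetical names: the page never holds the raw `USBDevice`; it only sees a narrow, device-specific command surface, analogous to how WebMIDI exposes MIDI messages rather than raw USB:

```javascript
// Hypothetical sketch: the installed add-on holds the raw device; a page only
// talks to the narrow surface below. Names and framing are invented.
function encodeCommand(op, payload) {
  // Tiny framing: 1-byte opcode, 2-byte little-endian length, then payload.
  const out = new Uint8Array(3 + payload.length);
  out[0] = op;
  out[1] = payload.length & 0xff;
  out[2] = payload.length >> 8;
  out.set(payload, 3);
  return out;
}

class McuDriver {
  constructor(rawDevice) {
    this.device = rawDevice; // only the add-on ever holds the raw USBDevice
  }
  // The only operations a page can reach: no arbitrary control transfers.
  async uploadProgram(bytes) {
    await this.device.transferOut(1, encodeCommand(0x01, bytes));
  }
  async ping() {
    await this.device.transferOut(1, encodeCommand(0x02, new Uint8Array(0)));
  }
}
```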

@fabricedesre

@thestinger tbh I didn't read all of your comment, because you don't address the point I'm making. I'm not against WebUSB; I only claim that it needs to be implemented in a way that doesn't water down the security expectations of the web platform, with meaningful user consent and a trust chain. The current Chromium WebUSB doesn't provide that.

@thestinger

What are those security expectations?

Firefox provides WebGPU without any permission required, including with known vulnerable drivers and firmware. It doesn't get disabled when the GPU driver/firmware is known to have unpatched vulnerabilities, and there's no dialog to select a GPU and give a site a bunch of access to it. That's far more of a real-world problem than users explicitly selecting a vulnerable USB device and granting access to it. A website can make extensive use of the GPU without user consent, even when the GPU driver/firmware has clearly been unpatched for months or years. Browsers decided GPUs should be treated as secure against remote attacks despite knowing they aren't, and made a bunch of access to them available. Why are USB devices so different?

Is the distinction that USB devices can lack any attempt at a compatible security model, rather than just having major vulnerabilities in their implementation? It would have been possible to require USB devices to advertise support for WebUSB, but it's quite late for that now, and existing devices are never going to do it. It's still possible, but Chromium isn't going to kill WebUSB support for existing devices, so it's unclear how to get devices to add it. It was more realistic many years ago and would have been nice, but that's not how things played out.

Selecting and granting access to a USB device with WebUSB involves friction comparable to running a native application, but at least it involves explicitly granting access to something, rather than that going unstated. There's no attempt to describe what a native application can do when you open it from downloads. Downloading things is part of the web platform too; there's even a UI to execute files without opening a file manager. Is that slated for removal?

@thestinger

There's very little leverage to get device makers or developers to do something new, since Chromium-based browsers already have the feature. It would have been nice to require devices to opt into WebUSB, but I don't see that happening now. If you want that to happen, Mozilla or Apple needs to make it clear that they will add WebUSB with that constraint and publish a standard for advertising support for it as soon as possible, so devices can start doing it. All these years of simply saying no are why that doesn't exist, and the possibility of it existing keeps shrinking. Most relevant devices could already support it. I'm sure Android would have it for fastboot and adb.
Both of those are great examples of dangerous protocols with a hidden way to enable them (enabling developer options, then OEM unlocking and ADB debugging, then approving keys for ADB and approving unlocking for fastboot). They're a perfect example of how granting USB access can be dangerous, but only when combined with very explicit steps to enable it. If you don't want to allow doing dangerous things with WebUSB, then you don't support what we want from it, which is a safer, easier way to replace the OS on a device you own. The whole point is that we want to do something that could be considered dangerous.

Android, Windows and ChromeOS come with a browser with WebUSB support. macOS doesn't, but it's not hard for people to install one. iOS is the only platform with leverage over developers on this and many other things; Apple can force people to do things the way it wants because there's no alternative. The only option is joining their MFi program and using a native app, which isn't an option for software not made by the device vendor themselves. There's always an easy alternative to using Firefox, usually one people already have, unlike on iOS, where people own the device but can't do what they want with it.

@whitequark commented Dec 31, 2023

@thestinger

> USB devices should be secured against being plugged into malicious hardware. If they aren't, that's already a problem, and the native app you're forcing people to use instead of a web app can take full advantage of that too.

To reiterate: this is not possible for devices built on some very popular ASICs, and that's not something that will change any time soon. "Should" isn't going to change decades of existing practice of trusting the host, nor will it create cost-effective ways of adding USB capabilities to hardware.

(The comparison with the native app is irrelevant, since the browser provides a security boundary where the desktop does not. The browser has this problem precisely because it is better at enforcing boundaries than the desktop, and because there is interest in preserving that enforcement.)


Just to be clear, I do not want to a priori rule out reprogramming of USB devices through WebUSB; I also have a use case relying on that entirely. What I want to rule out is unintentional and malicious reprogramming of USB devices, and I am saddened by the fact that this is very difficult at best.

@thestinger

thestinger commented Dec 31, 2023

Our use case is reprogramming the USB device by installing GrapheneOS on it. Android can and does provide software-based USB peripheral functionality from the OS: MTP/PTP file transfer, Ethernet tethering, MIDI and, since Android 14 QPR1, webcam support including using the microphone(s). It would be entirely possible and quite useful for it to also provide the ability to act as a keyboard, touchpad, etc., which shows it could trivially be used to attack the host by acting as a keyboard. Our use case is completely reasonable and very well secured.

We secure TLS to the extent that's possible with current browsers. We use CAA with accounturi pinning, and we would pin the Let's Encrypt roots along with backup leaf keys via HPKP if we were still allowed to, but Chromium/Firefox removed it. We use DNSSEC + DANE, but Chromium/Firefox have refused to support even a subset of these for key pinning alongside WebPKI.

Chromium/Firefox provide no way to use the baseline Android app signing security model, where a key is pinned on initial usage, downgrades are prevented and the key can be rotated with authorization from the previous key. If you want to provide better security for web apps, there are many ways to do it, and offline signing is one example. We'll happily do offline signing for the web installer, where users install it as a web app and then get key pinning and downgrade protection going forward. Mozilla doesn't allow us to do these things, and in fact has taken a position against standards related to this use case. Mozilla refused HPKP and thereby removed support for proper end-to-end encrypted web apps with offline signing, which were implemented by using a service worker to do HPKP suicide and then update the app with signature verification and downgrade protection. That was possible in a roundabout way, and now it's not, and related standards have been rejected outright instead of making something acceptable to Mozilla.

Users have to enable developer options, enable OEM unlocking, reboot the phone into fastboot mode, trigger unlocking from the browser and accept a dialog on the phone via volume up/down + power before the browser can freely flash the device. The extra authorization steps matter since an attached computer shouldn't be able to do those things without permission. If a device allows that, then that's already a problem without WebUSB: anything it's plugged into, including a charger, could flash new firmware/software. A significant majority of users are already on Chromium-based browsers with WebUSB, and many Firefox users already have a Chromium-based browser around for the ever increasing number of sites that don't work in Firefox because of cases like this one.
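For readers unfamiliar with the mechanics: fastboot is plain ASCII commands over a vendor-specific USB interface, so a page can drive it through WebUSB once the user picks the device in the browser's chooser. A minimal sketch, not the actual installer code — the interface and endpoint numbers here are assumptions (real code enumerates them from the device's descriptors), and 0x18d1 is Google's USB vendor ID:

```javascript
// Fastboot commands and responses are short ASCII strings.
function encodeCommand(cmd) {
  return new TextEncoder().encode(cmd);
}

// Replies are "OKAY", "FAIL", "INFO" or "DATA" followed by an optional message.
function parseResponse(bytes) {
  const text = new TextDecoder().decode(bytes);
  return { status: text.slice(0, 4), message: text.slice(4) };
}

async function fastbootUnlock() {
  // The chooser only lists devices matching the filter, and only after a user gesture.
  const device = await navigator.usb.requestDevice({
    filters: [{ vendorId: 0x18d1 }], // Google
  });
  await device.open();
  await device.claimInterface(0); // assumed: fastboot's vendor-specific interface
  await device.transferOut(1, encodeCommand("flashing unlock")); // assumed endpoint
  // The phone now shows its own volume-key confirmation; the host just waits.
  const result = await device.transferIn(1, 64);
  return parseResponse(new Uint8Array(result.data.buffer));
}
```

The point of the sketch is the last comment: the command merely asks, and the authorization happens out of band on the device itself, which is what makes granting a page USB access tolerable here.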

The vast majority of GrapheneOS users install it via the web installer. We strongly encourage it and discourage anyone inexperienced from using the CLI install process instead. There's very little reason for most users to use the CLI install process. There are some benefits if you're on a distribution like Arch Linux, where you can get signify and an up-to-date fastboot from the distribution repositories, so you can verify the zip signature before flashing and then verify the key fingerprint again after flashing, as web install users will be doing. Developers will be using CLI flashing, so they might as well learn how to do it, but flashing development builds is a different process, and the OS build produces a fastboot executable so you don't need it from elsewhere.

Mozilla's positions on many of these standards are the same as Apple's, and Apple's position is clearly based on anti-competitive behavior to promote their app ecosystem. Mozilla is doing the same thing: promoting native apps over web apps. Mozilla's position is "no, use a native executable, which we allow you to easily download" rather than "no, that's too dangerous", because Mozilla does not make it difficult to download and run native executables.

(The comparison with the native app is irrelevant since the browser is providing a security boundary where a desktop does not. The browser has this problem because the browser is better at enforcing boundaries than the desktop, and because there is interest in preserving this enforcement of boundaries.)

Downloads are part of the web platform; there are even programmatic APIs for them. Running an executable directly via Firefox's UI is fully supported, with very little friction involved. On Android, Firefox supports being granted the ability to install/update apps and will directly request that permission. It's part of what it was built to provide, and I don't think that can be ignored. It's not something provided incidentally as part of downloads but something that has been explicitly added; otherwise, you'd have to open the executables in the file manager.

Some desktop platforms have terrible code signing systems (macOS, Windows) and may add an extra dialog to this, but it's not a lot of friction. It's relevant that this is already easy for users to do in a much less private and less secure way, which Mozilla is forcing users to do by not providing a better alternative. WebUSB is a better alternative to downloading and running a native executable from a website. WebUSB is not perfect, and it can be dangerous, but the fact that it's a privacy and security improvement over the status quo makes not including it anti-privacy and anti-security. The real-world privacy and security provided to end users is what matters, and in the real world, downloading and running a native executable is the status quo. Chromium replacing that status quo has been very beneficial, whether people like it or not.

When Mozilla was making FirefoxOS, these features were explicitly on the roadmap. There was a goal of making web apps as capable as native apps, including providing these capabilities and a signing security model comparable to native apps. At some point, Mozilla forgot they were trying to turn the web platform into an alternative to platform-specific native apps.

If certain capabilities are regarded as too dangerous for regular web pages, then provide proper installable apps with key pinning, key rotation and downgrade protection. You could even make an app store for them. If all we had to do was bundle up our web installer into a Mozilla app store bundle and submit it, we'd do that, but we're not writing code specific to Firefox.

I think the problem is ultimately that Mozilla simply says no instead of having a vision for capable web apps and building that vision. Google are the ones with a vision for web apps and the ones building it, so Mozilla gets no say in how it happens: they're not acting as builders but as critics on the sidelines providing no alternative.

@whitequark

If certain capabilities are regarded as too dangerous for regular web pages, then provide proper installable apps with key pinning, key rotation and downgrade protection.

I do agree that this would be a strict upgrade compared to downloading a native executable on a desktop platform, and I do not see a rational argument based on security concerns against providing an API like WebUSB behind this requirement. (It would be equivalent to or better than installing a native extension polyfilling WebUSB through the use of native messaging, which is already possible, as demonstrated by the WebSerial polyfill.)

@zb3

zb3 commented Aug 13, 2024

After reading this thread, I'm shocked. I didn't expect Mozilla to be this unreasonable and detached from reality.

The web is more secure than native apps because here we have permissions. While proper signing (not TLS) would make it even more secure, you can't expect developers to wait for Firefox to ship it.

They just instruct their users to use Chromium-based browsers instead, which makes Firefox fade further into irrelevance. This is happening right now, in the real world.

Wake up, Mozilla!

@mozilla mozilla locked as too heated and limited conversation to collaborators Aug 13, 2024