30 January, 2023 Meeting Notes


Remote attendees:

| Name                 | Abbreviation   | Organization       |
| -------------------- | -------------- | ------------------ |
| Waldemar Horwat      | WH             | Google             |
| Frank Yung-Fong Tang | FYT            | Google             |
| Santiago Diaz        | SDZ            | Google             |
| Ashley Claymore      | ACE            | Bloomberg          |
| Daniel Ehrenberg     | DE             | Bloomberg          |
| Rob Palmer           | RPR            | Bloomberg          |
| Peter Klecha         | PKA            | Bloomberg          |
| Michael Saboff       | MLS            | Apple              |
| Dave Poole           | DMP            | Apple              |
| Josh Blaney          | JPB            | Apple              |
| Ujjwal Sharma        | USA            | Igalia             |
| Philip Chimento      | PFC            | Igalia             |
| Ben Allen            | BAN            | Igalia             |
| Nicolò Ribaudo       | NRO            | Igalia             |
| Kevin Gibbons        | KG             | F5                 |
| Richard Gibson       | RGN            | Agoric             |
| Chip Morningstar     | CM             | Agoric             |
| Yulia Startsev       | YSV            | Mozilla            |
| Eemeli Aro           | EAO            | Mozilla            |
| Daniel Minor         | DLM            | Mozilla            |
| Jordan Harband       | JHD            | Invited Expert     |
| Sergey Rubanov       | SRV            | Invited Expert     |
| Duncan MacGregor     | DMM            | ServiceNow         |
| Chengzhong Wu        | CZW            | Alibaba            |
| Tom Kopp             | TKP            | Zalari             |
| Linus Groh           | LGH            | SerenityOS         |
| Michael Ficarra      | MF             | F5                 |
| Istvan Sebestyen     | IS             | Ecma International |
| Luca Casonato        | LCA            | Deno               |
| Justin Ridgewell     | JRL            | Vercel             |
| Willian Martins      | WMS            | Netflix            |

Intro

Presenter: Ujjwal Sharma (USA)

Stenography support

USA: The committee agreed by consensus to work with a stenographer, with no objections or opt-outs from note-taking; details at https://github.com/tc39/Reflector/issues/460

USA: The next meeting is the 21st to the 23rd of March, and it will be a hybrid meeting. Thank you to everyone who gave feedback in the survey. We have quorum ahead of time. We have to approve the meeting minutes. Waiting for objections. I assume you approve the last meeting's minutes. Does anyone have any objections against the current agenda? No as well. All right.

Report from the TC39 Secretariat

Presenter: Istvan Sebestyen (IS)

  • (tc39/2023/002.pdf and tc39/2023/004.mp4)

Note: Some notes were skipped while technical details were sorted out.

IS: Now, I was in the middle of it, and I was at the point with the reminder that this is reconfirmed again for five years. One standard is the JSON standard; I never know the number on the ISO side because for me it is just the five-digit number, but it is JSON, and it is up for approval by March 2023, and the other one, make sure that the SC22 vote is approved. And then the next point, the next ExeCom meeting, and this is published everywhere else, and some related items from the last Ecma General Assembly. So now I go here to the next one. Do we see it now? I have now moved forward. This is now the latest. This is the TC39 document list.

RPR: Yes, we see the document list.

IS: Okay, good. Basically I'm not going to repeat it. These are the documents that are kept in parallel to GitHub and put on the official Ecma website. This is basically what is guaranteed to be kept there forever. If there is any important document from the standardization process that you feel is very important, we can put it here and then it will stay there forever. So here, since the last meeting, we don't have too many. This is the list. This is TC39. Now I am at the next slide, and then the relevant Ecma General Assembly documents; only a very few are left, because at the November meeting I already presented all the documents that went into the December meeting, and these are basically the documents that have come since then. One important piece of information here, which came to me as a surprise, is this withdrawal of PayPal, and this affects us in TC39, and I guess this is one of the reasons why AKI has removed herself from the participants. I'm also joining you in thanking her for her long, long contribution to Ecma and to Ecma TC39. So this is the list of GA documents for 2022, and now I'm at the next slide for 2023. And there is one document from which I have taken data,

IS: This is the 001, the Ecma statistics for 2022, and the other one is also just a mirror, the agenda of this meeting. We don't have more documents here for the General Assembly. Why is this interesting for TC39 members? Because we have here this double documentation: one is the official one on the Ecma website; the other one TC39 is using basically for Internet purposes, and here basically this is the GitHub and also the other two that we are using. I go now to the next one. Now the next one, this is new. And this came from a discussion that I had with Patrick Lute, and I have already reported that I am always getting, I would say, repeated complaints about our long meeting minutes. And obviously the reason for the long meeting minutes is, first of all, you know, the long technical notes, and those Ecma members who are not interested in the detailed work of TC39 are not interested in the technical notes, and many of them have the feeling that the first part, which would be the summary part, is not really informative enough. So Patrick suggested that we should improve the part where we report what kind of contributions were presented to the TC39 meeting. We publish all the contributions that we receive and can collect after the meeting, you know, the different slides; this is always published and stored as Ecma documents. But here the issue is: why don't we ask each contributor to provide us a very short, one-paragraph summary of what their contribution is about? And then we can also, from the secretariat, maybe copy the resolution, what has then been decided on this contribution. So here the request would be, if everybody agrees with it, please provide to Patrick or to myself a paragraph on your contribution, and then we could include it in the next meeting minutes in order to improve a little bit the general quality of the minutes that we are preparing. I go to the next one. The next one is here: the membership changes. Basically there were a number of companies which were formally approved by the General Assembly meeting in December; they could already participate, and some of them did indeed participate in the TC39 meeting. The only thing was that they could not formally vote; we didn't have a formal vote, whether associate member or ordinary member. And then we received this withdrawal letter in the secretariat in November '22 from PayPal. I have to tell you and remind you, because this is a repeated mistake all the time, that if any company wants to withdraw from Ecma, then please, please do it before the 1st of October. If you don't do it before the 1st of October, then it automatically prolongs for one year, and then we have the discussion, you know, is it too late? Is it not too late? Are you going to pay? Are you not going to pay? Et cetera, et cetera. Fortunately it is not my business anymore; Patrick has to deal with it. But we have the same situation also now with PayPal. So for the next year, if somebody wants to quit from Ecma, which of course, you know, I would not wish or recommend to you, but if it has to be the case, then please do it before the 1st of October. So this is regarding this slide, and let me go here. The next slide is recent TC39 meeting participation. I go immediately to the second page (slide 11), which I don't know why; maybe it only shows in presentation mode, I don't know. It is on the next page. I don't know why.
The point is, I can also tell it verbally, you know: it was nothing more dramatic than what we have seen before. We have steady participation in the meetings, more or less with the same numbers. Nothing really exciting. I haven't the slightest idea why I cannot show it to you. I go to the next page, which is about the information that I have taken from the Ecma document 002-2023 regarding the statistics. On this slide, you see the Ecma website, the entire website.

RPR: We have one minute to go I would say.

IS: How many?

RPR: One or two?

IS: I’ll use two.

IS: So I would say, please read it. Next one: this is the Ecma website page access for 2022. The other one was 2021. I always take two years so you can compare; it is basically very similar across the two years. The same is also true, now I am on the next one, for 2021, regarding the Ecma PDF standard downloads. They are very, very similar to the '22 figures, which are on the next slide, and you can see here that in 2021 the share was 58% of all of the downloads; Ecma TC39 dominated in 2021, and the same is also true for 2022. Now it is a little bit less, 55%, but we are still dominating here. Here these are the access figures, so HTML access; I have only taken the last four yearly editions of ECMA-262 and ECMA-402, and you can see there is a significant difference, approximately a factor of four, between the downloads and the access numbers. So you can also read the statistics here by yourself. The next slide is the TC39 plenary schedule; you know it also from the invitation to this meeting, so I can switch. Also regarding the rules: that has also been published on GitHub, and it is just repetition; here it goes on for five or six pages. It is just repetition regarding the ISO renewal of the two standards. I am not going to read it through again, because I already presented it at the last meeting and said a little about it here, so I am not going to present it. The two venues are also not terribly new or important, because we have already seen them: these are for next year's General Assembly meetings, one in Japan in Tokyo and the other in December in the U.S. There is no place listed for the ExeCom meeting; it is my fault, so I don't know. And then regarding the ExeCom meetings: the last item I already announced at the December meeting, who is President, Vice-President, Treasurer, and ExeCom members for 2023. All approved, no surprises. Congratulations to them. And Samina has been approved as the new Secretary General. That's it. Thank you very much.

Note: The presentation is fully included in the slides (tc39/2023/002.pdf) and also as audio/video in tc39/2023/004.mp4.

RPR: Thank you. One question from DE.

DE: Yeah. I wanted to speak to IS's suggestion that we capture the summaries. I want to note that in the minutes the summaries are already separated out. If the TC39 secretary (IS) wants to collate those and put them in the minutes, that's possible, with links to reference materials.

DE: If the secretary doesn't have time for that, it would be welcome if somebody else in committee did this. LEO did this in past meetings. We're behind in the summaries. I think it would be welcome. I think the pressure taken off from transcribing should give us all a bit more energy to do this important task of making accessible summaries at meetings.

IS: So if you give us the document you are talking about, of course, then we can also do it.

DE: You already have the document. This is the minutes that we give you every meeting. Every heading will have a summary at the bottom. You can look at all those different minutes documents that have been submitted to the filer. They all have summaries, and it is just a matter of collating them.

IS: They’re in the technical notes?

DE: Yes. We can cut it short between you and myself. If I know, we can also take it on ourselves. Because we have to take out also the summary, the decision. I'm talking about the decision but here –

??: That's right. The decisions are all listed in a section in the notes for each particular topic.

IS: I have no problem with the decision part; that is always at the end. I have problems with the summary of the contribution.

DE: So to summarize the contribution: for each contribution there's a link to supporting documents. I think we can provide the links and the authors and list the conclusion. That would be a useful start for a summary document. Of course, it would be more useful to have a summary of the discussion, but that's more involved. You have a link, a reference to the contribution.

IS: Okay. So then I suggest that we cut it short outside of this meeting, because I have the slight feeling that we are talking about two different things, you know, but we should not take the committee's time on that. I think in a bilateral way we can clarify it very easily.

DE: I agree. But I want to emphasize it would definitely be useful to have more detailed summaries of the meetings. If anybody wants to get involved in that, then please, you know, be in touch.

IS: Okay. So contributors are always welcome for sure. Thank you.

RPR: Thank you for this. Let’s move on.

Update from Chairs

Presenter: Rob Palmer (RPR)

RPR: So for the next item, I'd like to give some reminders about some things on the reflector. This is from the chairs: we have a couple of requests for feedback there. These are requests for feedback on the editor group and the chair group. So if you have any kind of private feedback that you want to relay, these are anonymous surveys. I'm putting the links in the chat here now; any time in the next couple of weeks would be appreciated. This is not about elections; this is general feedback for the groups. Then separately, on the subject of elections: at the moment, there is no proposal to change the editor group, so we are not on track for any kind of election. Obviously people are always welcome to suggest it if they think there are different people available. For the chair group, there is an explicit request for nominations, so that is still open. We are looking for both a chair and a facilitator. I'm going to put that one in the chat here as well. We still have a couple of weeks; we hope to get nominations by the 13th of February. If you would like to help out with running the meetings and some of the things just talked about, that would be most appreciated. The aim is to finish nominations in two weeks so we can have an election for the chair group at the next TC39 meeting.

??: One more thing, we also need a TG3 chair, as reminded in the chat.

??: Yes. And MF points out we're looking for a TG3 chair; anyone who likes security, please step up. That's all from me. Next up, we have Kevin Gibbons with the ECMA-262 status update, or maybe someone else from the group.

ECMA262 Status Updates

Presenter: Kevin Gibbons (KG)

KG: So this is the usual editors' update, going over editorial and normative changes. Very little in the way of notable changes. We are of course continuing our usual efforts at cleanup and consistency. The only change worth calling to the attention of plenary was this one, #2681, which is a tweak to how the "code evaluation state" is tracked. This is relevant to the machinery for generators and async functions, and not relevant for those not looking at those, and it is a nontrivial change to that machinery. If you previously looked at it and were confused, hopefully the machinery is more sensible now, and there are other improvements in the pipeline for those as well. And then normative changes we've landed:

KG: The first is #2819, which is a tweak to the mechanics of generators that we got consensus for at the previous meeting or the one before, I forget which. Possibly the one before that. This all got consensus recently; it had just been outstanding for a while. #2819's tests have landed. #2905 is not actually a normative change; it was a change to the way that the module importing machinery is wired. This is in order to make it easier to do some of the module-related changes that we have coming through proposals. There's a change for integration on the HTML side, but no actual immediate normative effect. And #2973 is a sort of web-reality change that, in the atomics machinery, allows browsers to optionally make timeouts somewhat larger as part of Spectre mitigations. This is something that some browsers are already doing and many browsers feel they need to do. No other notable changes to the specification or environment since then. And then, in terms of upcoming work, it is basically the same list; I don't believe we have added anything to this (slide 5), so I'm not going to go through it again. But just a note that we are still working on refactoring a bunch of machinery for clarity and consistency. That's all we had in terms of the editor update.
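For context, here is a rough, non-normative sketch (not taken from the meeting or the spec PR) of the construct that #2819 concerns: delegated `yield*` inside an async generator, whose internal awaiting the PR streamlines.

```js
// Hedged illustration only: #2819 adjusts the await-related machinery used
// when an async generator delegates with `yield*`; the exact semantics live
// in the spec PR, not in this sketch.
async function* inner() {
  yield 1;
  yield 2;
}

async function* outer() {
  // Delegation: each value produced by inner() is re-yielded by outer().
  yield* inner();
}

(async () => {
  for await (const value of outer()) {
    console.log(value); // logs 1, then 2
  }
})();
```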

Conclusion/Decision

  • Normative changes:
    • #2819: Avoid mostly-redundant await in async yield*
    • #2905: Layering: Add HostLoadImportedModule hook
    • #2973: Allow implementations to pad timeouts in SuspendAgent
  • Editorial changes:
    • #2681: Use Abstract Closure to set the code eval state

ECMA402 Status Updates

Presenter: Ujjwal Sharma (USA)

USA: So hello everyone. I won't take a lot of your time, and will get right to the point. Last meeting, if you remember, we presented a couple of normative issues for approval, and this one wasn't approved because of the creation of this pull request, #729, during the meeting itself. It took a while for us to get around to it, but this has been reviewed by TG2, and the implementers have confirmed that this is a good change. So I would like to ask for committee consensus on this one.

RPR: A point of order from DE.

DE: Do we have a conclusion for the previous topic? For KG’s update?

KG: No. I don’t think there’s ever a conclusion for those updates. I mean, it’s their updates.

??: Just FYI.

??: Okay.

DE: We're talking in the chat about, for each of these topics going forward, having not just a conclusion of what we got consensus on but also a short summary that we could put in the minutes. Kevin, do you think you could write one of those briefly, or we can talk about this offline?

RPR: I'm happy to do reminders for those at the end of each section.

KG: I can put something in the notes. I don't think there's much to say, but I can put something.

DE: Great, thanks.

USA: Apart from that, nothing to add. Nothing on the queue. So I take it that folks are not against the change. It changes the error handling snippet here in ECMA-402 to use the correct starting year, which is one, not negative zero.

RPR: Thank you. DLM is plus one on the change. And the queue is empty.

USA: Perfect. Thank you all.

DE: We’re recording the change as the conclusion?

USA: I could add that to the notes.

??: At the end of each item, because the transcriptionist is obviously giving us the play-by-play, please can the presenter write up the conclusion with the main points. This is the usual section that we have in the notes where it says conclusion.

??: I'd ask that it include brief rationale and discussion points that were especially critical.

??: Thank you.

Conclusion/Decision

The presented PR was approved, with positive notes from DLM and PFC.

ECMA 404 Status Update

Presenter: Chip Morningstar (CM)

RPR: Chip, do you have anything to say?

CM: No news is good news.

RPR: Thank you.

Conclusion/Decision

  • ECMA 404 remains stable. No changes or updates.

Test 262 Status Updates

Presenter: Philip Chimento (PFC)

PFC: We don’t have slides. We prepared a couple of paragraphs which I will read out and I can paste these in the notes. If there are no questions, they will be the conclusions. They’re very short.

Conclusion/Decision

For stage 3 proposals, we now have tests for isWellFormed. We've had recent PRs making progress on coverage of Array.fromAsync, RegExp modifiers, Temporal, Intl.NumberFormat V3, and Intl.DurationFormat. We'd love help on others; some of them already have volunteers, so ask in the "TC39 Test262 Maintainers" channel on Matrix if you're interested, to make sure we don't have overlapping efforts on the same thing! Our trial run of our new RFC process was successful and we've used it to make some adjustments to our draft process document, which will become official soon. In further news about contributor documentation, we're preparing a document explaining the rationales for some existing choices made in the test262 codebase. We're hoping to add to this as new questions come up so that it's a place where contributors can get answers to the question "why is this like this, and what should I know if I want to change it?"
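As a point of reference (a hedged sketch, not an actual test262 file), `String.prototype.isWellFormed` reports whether a string is free of lone surrogate code units, so the new coverage exercises cases along these lines:

```js
// Illustrative only: behavior of the Stage 3 String.prototype.isWellFormed.
console.log("abc".isWellFormed());          // true  (plain ASCII)
console.log("ab\uD800".isWellFormed());     // false (lone high surrogate)
console.log("\uD83D\uDE00".isWellFormed()); // true  (valid surrogate pair)
```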

RPR: Thank you PFC.

Updates from CoC Committee

Presenter: Jordan Harband (JHD)

One minor incident to report (details redacted from public notes); nothing further.

Documenting Stage 3 proposals which are not ready to ship

Presenter: Daniel Ehrenberg (DE)

DE: To review the TC39 stage process: at Stage 1, we have it under committee consideration. At Stage 2, we want to do this and we have a first draft that looks okay. At Stage 3, we have a hopefully basically final draft, ready to go for implementation. At Stage 4, we have two or more implementations and tests, so it goes into the draft standard. But what happens between stage 3 and stage 4? It can be a little subtle.

DE: Implementation: implement the specification and land tests. Some bugs are discovered and resolved; of course bugs can and should be discovered before stage 3, but sometimes they come up later. There's also integration with host environments, if that hasn't already been done. The committee may come to consensus on normative changes during stage 3, just as it may come to consensus on changes to any other part of the specification that's fully landed. Hopefully these aren't too big. Between stage 3 and stage 4, some ship the proposal, and some embeddings in environments then have backwards compatibility constraints.

DE: The question is, when does it make sense for everybody, not just browsers, to ship? Usually things are ready to ship when they reach stage 3. And that’s a flow that I’d like. I think it’s been working well for us. And now let’s maintain it.

DE: Occasionally we may want to wait before shipping stage 3 proposals. One reason is that there may be explicitly conditional consensus: we may say something is stage 3 once some condition is met. There may be known issues that the group wants to resolve and hasn't resolved yet. One important class of these issues is host environment integration: if the host environment integration hasn't been done, then that environment can't ship the feature yet. And, you know, pragmatically, something that happens is that one engine kind of waits for another engine to have implementation experience before deciding to ship.

DE: We have a number of, well, just a couple of stage 3 proposals that are not ready to ship. One is Temporal. We keep having normative changes for Temporal, but it's mostly stable. And in general stage 3 is usually stable and should still be understood to be so. We had ShadowRealm almost shipped in Safari with the HTML integration being unfinished. The goal of the presentation is to make a way to document clearly when essential pieces are known to be missing, to reduce the chance of having disagreement between implementations about when things are shipped. What we want to avoid is a compatibility matrix. We want to avoid a state where different engines, or different implementations in general, ship different things or different subsets of things, and then application developers have to worry about not just whether this feature is there or not, but which version of the feature it is.

DE: So my suggestion here is to document in the proposals repository, maybe in a separate column just for stage 3 proposals, that a given proposal, even though it is stage 3, is "not ready to ship". The strong default would remain that Stage 3 means something is ready to ship; we would be documenting the exceptional cases. There's a question of how you would set this value. My suggestion is that the proposal champions themselves set the value. And of course this is considered non-binding: it's fine for engines not to ship things that lack this mark, and to ship things that do have it, but it's a clear central documentation point to make sure that we're on the same page.

DE: Some feedback I have gotten so far about this proposal: some people like that it's lightweight; they like this idea. Some people argue there should be a mechanism for settling disputes, so that in case the champion is being negligent in saying their proposal is fine, but other people say there's a real problem, we should have some way of settling that. I think it's important that we work it out if a situation like this comes up, but at the same time, the strongest version of this that one could imagine is requiring consensus on becoming shippable ("Stage 3.5"). I think that would add significant friction to our process, having an additional fifth consensus-seeking stage, and I think I'm not the only one in committee who thinks that. I'm not so keen on the very strong version of this. But certainly if there's any kind of disagreement on the marking of a proposal, it's fair to bring up anything like that at all in plenary.

DE: Another possibility is we could try to reinforce the core meaning of stage 3: that stage 3 is really complete enough, and stage 2.5 could be, you know, where you say this is complete, and I want to get buy-in on this being complete, but we still have these integration concerns to resolve. And that could also be nice. But I think we would also have some challenges in getting sufficient interest in doing the serious review of stage 2.5 proposals. We have seen in the past that sometimes we only get a serious look at something once it's at Stage 3; that's when we discover the late issues. Then there's the question of what to do with things that are already at stage 3.

DE: So that’s why, overall, as a sort of compromise to start with, I think having this purely advisory, initially-just-set-by-champions column could be a decent place to start with and then over time we can work on dispute resolution mechanisms maybe based on if there’s a particular case where there’s a dispute, then we could work that out and build that into a process.

DE: So I have the PR for this in the how-we-work repository. And I’m wondering what people’s thoughts are. Thank you.

DLM: Thank you. We discussed this internally at the SpiderMonkey team and we’re in favour of this. Explicit documentation is a good idea and we like the idea. This is a lightweight process and I agree that the idea that any sort of dispute resolution we can worry about that, if and when the dispute comes up. Given the idea this is non-binding and seems like we have a dispute resolution mechanism built in. Anyways, yes, support for this. Thank you.

MLS: The process document is pretty clear. Implementation types expected at stage 3 are spec compliant, but not shipping. That is the time for implementers to try things out and work out the bugs. At stage 4, the implementation types expected are shipping implementations. Now, saying that, we do implement things at stage 3. We’re not super eager to ship them, they usually go into our technology preview, nightly and things like that. I don’t want to change the process document to say something is shippable or not shippable at stage 3. Certainly, implementors can decide if they want to ship something that is at stage 3, that is their own decision.

DE: I’m kind of surprised by this comment because I thought that JSC had repeatedly shipped things to the main version of Safari at stage 3.

MLS: It’s rare that we do, unless we have high confidence a proposal is stable and isn’t likely to change. Typically we “ship” a stage 3 proposal in our nightly or technology preview. I made this same statement in the past. This is the way we tend to work in WebKit.

DE: I mean, the goal here is to build high confidence in stage 3.

MLS: Agreed.

DE: Or to maintain it. We had pretty extensive discussions around when private fields and methods were at stage 3 for a while, and so I thought at that point we had established the shared goal of things at Stage 3 being by default shippable.

MLS: They're not. Look at the process document. At Stage 3, the implementation types expected are spec-compliant. The implementations expected at Stage 4 are shipping.

DE: We had quite a long back and forth about the semantics of this text. I think it’s just genuinely ambiguous and we can form different kinds of shared understanding about the way we want to do things.

MLS: I don’t share the understanding you’re talking about. Again, something is not shippable unless it’s fully stage 4. And even stage 4, sometimes we have to go back. And that happens. Stage 3, yes, we and other implementers will implement various features that are at stage 3 but we certainly are reluctant to ship things that are stage 3 to release versions of Safari. The JSC engine is used for all kinds of other applications.

JHD: MLS, I would say that my interpretation of the process document is that things have to be shipped in order to get Stage 4 which means that in fact, they are shippable at stage 3 - but they’re not required to be shipped at stage 3 which is why it’s fine that safari or any other implementation would choose not to ship until stage 4. I don’t see how there’s any argument it’s stage 3 isn’t shippable because if nothing is shipped in stage 3, nothing will ever get stage 4 according to the process document. That’s always how we’ve interpreted it.

MLS: JHD, I disagree with that. We have Test262 tests required for stage 3. We use those to test the implementations. Obviously implementations also do other testing, like making sure it doesn't break something else, has good performance, and things like that. Again, stage 3 is a feedback process. We are not done when we're at stage 3. And if we, TC39, think that at stage 3 we're going to ship betas to the world, we get ourselves into a place where there's difficulty. We're going to talk about an issue like that later in this meeting, where we have gotten ourselves into some difficulty with an implementation shipping something that is stage 3. I think we would like to avoid these types of problems.

JHD: I hear what you’re expressing; I’m not arguing that point at all. I’m saying maybe we’re using different definitions of the word “shippable”. I’m saying that the process document always said that entrance criteria is shipping implementations. You can’t enter stage 4 until a proposal has shipped. I think the word ‘shippable’ like according to that definition means that it happens in stage 3 for someone.

MLS: The criteria does not say shippable. It says compatible implementations that pass.

JHD: And “Significant in-the-field experience with shipping implementations”. Something cannot be shipping until shipped, and must thus be shippable prior to stage 4.

MLS: Something like the Safari Technology Preview or Chrome Canary meet that criteria.

DE: The thing is that historically the process document text was written deliberately ambiguously because there was a standing disagreement whether the process should work the way MLS says or the way JHD says. That’s why it’s amenable to the different interpretations. I had thought that in the past few years we had kind of arrived at a consensus on one of those readings. It sounds like we haven’t. And so we will just be coexisting with the two readings.

DE: In that context, the question is whether it would be okay to document in the proposal repo that something is specifically not shippable. That doesn't imply, at least logically, that anything without that mark is shippable; it means we're raising a particular issue. Given that we live in a world where not just browsers but also tools and runtime environments do sometimes ship stage 3 things, marking something as not shippable will be a signal to them, to those people who believe it is appropriate to do this thing that Apple says they don't want to do in the future. How do you feel about that?

MLS: We haven't changed our point of view. And if I understand correctly, a lot of these features are put behind flags. We do that; I know other implementations do that as well.

DE: So I don’t want to assert anything about your past or present point of view but it’s clear that your point of view is not the universal one. So given that –

MLS: There is no universal one. That is the point I’m making! There is no universal one.

DE: I agree. So I’m wondering given the context –

MLS: Good.

DE: I mean, it’s news to me. But, you know, it’s –

MLS: It shouldn’t be. I have made this point in the past.

DE: So given that context, and given there isn't a universal point of view, how do you feel about including "not ready to ship" in the proposal repo in the column for stage 3 proposals?

MLS: So by definition in my mind, that’s on any stage 3 proposal. It is not ready to ship.

DE: You’re opposed to documenting this because you think it’s inappropriate if anybody ships a stage 3 proposal and that’s kind of on them.

MLS: They can do that. But I don’t think we want to go on record, that TC39 says if it’s stage 3, ship it.

DE: So, yeah, this disagreement about the meaning of stage 3 is noted and unfortunate. Given that, you still don’t think we could give signals to this subset who has the differing interpretation?

MLS: Stage 3 the signal is implementers should implement this and make sure it passes acceptance test and should get some experience with it either internally or maybe with flags externally. That’s how I read the documents as they exist right now. When we’re at stage 4, we are ready to ship.

DE: So you’re saying you’re opposed to writing this column?

MLS: I don’t think that the column – I think the column would be sending the wrong signal. If you say something is ready to ship in stage 3 you basically are telling the JS world, “okay it’s in the standard, use it”.

DE: The column would be “not ready to ship”. But not “ready to ship”. The column would say don’t ship this. The column wouldn’t say –

MLS: If the column is empty and doesn’t say don’t ship it, does that mean ship it and developers should use it?

DE: It specifically would not mean that, and we would link a piece of documentation that explains the subtlety.

MLS: I’m not sure what signal it’s saying.

JHD: I think I would interpret it as, currently there is an ambiguity, and implementations can choose for themselves whether they ship in stage 3 or not.

MLS: Isn’t that the case right now?

JHD: There is ambiguity. What I believe DE is proposing - and this is my interpretation - is we would put an explicit block or X or whatever next to the proposals that we don’t want the ambiguity to be there for, because we think they’re not ready. When we think they’re ready, we would restore the ambiguity back to what it is now. It would be a way to prevent the danger of implementations looking at the proposal table thinking they can choose to ship in stage 3 and then shipping something that shouldn’t be shipped yet.

MLS: Okay.

JHD: Because there’s a lot of folks - a lot of implementations - who do believe that stage 3 is when things should ship and will then ship things in stage 3. This is a good way for the items that need it to try to prevent that outcome.

USA: I'd like to remind you all that we're nearing the end.

SYG: Yeah, I agree with JHD's interpretation. I think somebody has to ship during stage 3 to build the confidence, and then we get to stage 4 after a few shipping implementations. So, also noted for the standing disagreement, but that is how Chrome has operated for the past, I don't know, decade. The reality is, process document exegesis aside, that due to a variety of factors I don't want to really discuss here and get into a heated discussion about, implementations have different shipping policies with regard to the stage process. And most of the time that difference in interpretation actually does not matter. And I think the point that DE is highlighting here is that there are a few exceptional cases where it really does matter for us to coordinate correctly, given the standing disagreement. How do we best coordinate on those few exceptional cases? Non-binding documentation is one way, and that seems a good way to start to me.

MLS, I’m interested to hear if you think nonbinding documentation is the way to coordinate those exceptional cases like Temporal? I am not advocating changing the process document to explicitly say that no implementations ship until Stage 4.

MLS: The issue is that “non-binding” is just that. So it’s a hint, right? It’s not binding. I’m not sure what signal it sends. The real issue here I think in all of our minds and the elephant in the room is when do we want developers to use a new feature? I don’t think we want developers to use it till stage 4 unless they fully understand there is a possibility the feature may change.

MLS: I agree that we need to get some feedback from developers and I think that’s when we hide something behind a flag and let people try it out or make it available in a Canary or something like that.

DE: There are lots of different ideas about how the waterfall mechanism should work, how much it should be waterfall versus prototyping in production or not, and it's legitimate to disagree about that, isn't it?

MLS: Sure. The point I’m making is I don’t think we want to codify when we think something is ready to ship in stage 3.

YSV: So the shipping policy we have at Firefox is pretty close to what MLS described for Safari. We generally don’t ship unless we have very high confidence there will be no web compatibility issues and very unlikely to any changes at that point. This is usually reserved for simple proposals. That said, like, we are looking at an interesting situation this week where we’re going to be looking at one proposal in particular that is already shipping in one browser and two browsers have it in progress or behind the flag.

YSV: I have a couple thoughts here. Stage 3 should be functional, and in some ways we have intentionally made it ambiguous, as mentioned, to allow multiple different strategies employed by implementations for how they gain confidence in a given proposal. And I'm wondering if that information is enough, or, maybe the better way to phrase my question, is it structured enough? Personally I am swayed by the idea that it may be a good idea not to have a "not ready to ship" column, but rather a "requires coordinated shipping" flag, which says that for whatever reason we are coordinating the shipping on this for later. And another thing is, how do we define shipping?

YSV: For example, if I push something to nightly with no flag and restricted to Firefox Nightly, that means all of the nightly users get that and it doesn't ride the trains [to the main stable release of Firefox]. That's how we caught a number of our compatibility regressions, because Firefox has a tendency to get more of the bugs when we're doing testing, for whatever reason. I'll let you fill in the blanks. In addition, that's also another question: what does it mean to ship? Shipping unflagged in a specific version of a browser? In our case, unflagged on nightly would impact both our Developer and our Nightly builds, but it would not impact the beta or our release builds, ensuring there's stability for those two and making it safe to unship something from nightly. It does have a tendency to cause breakage in nightly; that's not something that we do lightly, but it does capture the problems.

YSV: One thing I would be curious about is whether or not we’re getting more information by shipping directly to release. That’s something that I think Chrome would have more experience with because we don’t generally do that. And, yeah, in general, I do sort of support a little bit more tightness in the definition here in particular with an eye towards what we will be discussing later this week.

PFC: I think YSV covered my point of naming the column differently. I will delete that and skip to my second point. MLS asked the question, should we be shipping Temporal because it’s in stage 3? The answer is no because we placed a condition on it entering stage 3 that we need the IETF to standardize the string notation before we can ship it. Ironically, if we had this column in the proposal repo that had a check mark in the box that said 'don’t ship Temporal yet,' that would be clearer than it is now.

DE: For a process similar to Apple’s wouldn’t it be useful, given that these browsers do ship small proposals that are at stage 3 that seem very stable, wouldn’t it be useful to have this column that might be used for those proposals?

MLS: I find it very useful for every proposal to list the known implementations of the proposal and their shipping status (nightly, canary, release X, behind flag).

YSV: Actually I did do something like this in my personal tracking of the TC39 proposal’s repo ages ago. I don’t know if anyone remembers that. I had the status where we were in shipping. It’s difficult to keep up to date but maybe we can pull ‘canIUse’ data to get information if I understood you correctly Michael.

DE: Yeah. We could enable columns for that in the proposal’s repo. Would that be something that you’re in favour of? For stage 3 and stage 4 proposals.

YSV: I think it might not solve the problem that we’re discussing here, which is – you know, we have this implicit agreement right now around Temporal but I think that the problems may be a little bit more difficult than maybe we have time for fully discussing right now. I don’t even know how much time we have left.

DE: I guess we’re out on the timebox. I’m wondering MLS, just to conclude, would a potential rename of the column like YSV suggested potentially alleviate your concerns?

MLS: What would be the exact name of the column?

YSV: If I can jump in here, I actually wouldn't mention shipping, because I think that raises the danger of people maybe using the proposal prematurely. I would instead say "requires implementer coordination" as a checkbox, and there will be some proposals that have it.

MLS: Sounds good.

DE: Okay, great. I will make an iteration of the PR that I previously had that tries to capture this spirit, and get your reviews before landing it. Does that sound like a good conclusion, acceptable to YSV and MLS?

MLS: Sure.

YSV: Works for me. I generally support us being more precise for the cases that we’re discussing.

DE: Yeah, I’m wondering if we can call it “requires coordination”. But we can resolve that off line.

YSV: Sounds good.

SYG: It’s good.

MF: So I do support this concrete proposal going forward. But I did want to say, DE, that you dismissed a stage between 3 and 4 a little bit quickly, in my opinion. I believe the only argument against it is that it's a heavyweight process to demote from, like, a stage 3.5 to stage 3. I'm not convinced by that. I do think we should keep that door open in the longer term as we evolve this process.

DE: Yeah, if someone wants to make such a process proposal I wouldn’t be especially opposed to it. However I don’t think it would get consensus in the committee.

SYG: MF, I would not agree to a new stage. Let me be somewhat blunt here. When we participate in standards, one of the things we give up explicit competition on is the feature set. We all agree that we ought to eventually implement and ship the feature set we agree to for JavaScript, so that we are interoperable. One of the things that implementations do compete on is performance and stuff like that, and when things ship.

SYG: So anyway, I think one of the dimensions that standards ought not to prevent folks from competing on is the speed with which they implement and ship something. There's risk involved with being first shippers, which I think various implementations have all experienced at this point. There are also some rewards. And I don't want to take that dimension of competition away. For that reason, I don't want an extra stage at which shipping is to be coordinated, like a consensus-seeking stage. Some things require coordinated shipping; we can agree on that case by case. That's totally fine. But as a matter of course, I don't want the default to be that we have to kind of flip the bit at the same time. That is explicitly not something that I want. DE's proposal to document the exceptional cases is a good start. I do believe these are exceptional cases.

DE: Whatever "requires coordination" process we have will have to not require sign-off from all browsers at once, for example. You're saying we couldn't set a bit that says now we're in lockstep mode. Is that what you're saying?

SYG: That’s what we’re saying. We can all agree to the lockstep mode for exceptional proposals. I don’t want that to be a consensus seeking stage now unless it’s really needed. I don’t see a reason for it. I would argue against it as the consensus seeking stage and not argue against it for exceptional cases. Exceptions come up. That’s why I’m in general opposed to the new stage: I think it gets us closer to the lock step mode. I don’t think that’s the role of the standards committee. That was all.

DE: I want to ask if anybody – because many people advocated for more stages – if anybody has any further thoughts or concerns that could lead to difficulty in terms of coming to a conclusion on this topic offline?

??: Sounds like we have consensus, modulo iterating on the concerns from YSV, MLS, and SYG, to work offline (on GitHub) to develop a wording and label that works.

Conclusion/Decision

There was detailed discussion about the state of the TC39 stage process. Different JavaScript implementations have different policies about which Stage 3 proposals are shipped, but all ship Stage 4 proposals, and none ship Stage 2 proposals. The TC39 stage process was deliberately written to allow these multiple policies. Overall, it’s good to note the need for additional coordination in some cases, but there should be no implication that JS implementations should ship Stage 3 proposals.

The committee agreed to adopt a shifted version of it based on a different column header (“needs coordination” rather than “not ready to ship”); the details here will be worked out on GitHub.

A procedure for multiple active supporters in committee to achieve consensus

Presenter: Daniel Ehrenberg (DE)

DE: Strengthening TC39’s consensus process. You know, we use consensus here. Let’s just review some reasons why it’s a good thing to do. First, it’s a conservative default, which means that we’re going to leave things how they are now rather than mess things up if we have any significant concern. Making a change is a big deal involving lots of implementations, lots of JavaScript developers and we want to get it right.

DE: Consensus enables certain specialized delegates to have a strong seat at the table to preserve, for example, web compatibility and invariants, and ensures no critical stakeholders are excluded.

DE: One thing that I really like is that it represents all stakeholders here in committee as equals. Even though we have different places in the ecosystem, we have procedurally an equal seat at the table.

DE: Some things require consensus like stage advancement, making a normative change to the specification, or making a statement as a committee collectively. Many other things don’t require consensus explicitly, that is important for our velocity.

DE: Right now, we achieve consensus based on the committee talking about a topic and the question being asked: "do we have consensus?". And in particular, the chair asks "does anyone object?". Either someone objects (it has to be with a reason, or otherwise it doesn't count), or no one objects. Possibly, based on the objection, there could be further debate. If there are no objections, there's consensus. We record it in the minutes, hopefully with a good summary.

DE: So what I don't like about the current system is that it focuses exclusively on hard objections. Obviously, the discussion isn't supposed to be only about objections; the discussion is supposed to be constructive, I think.

DE: But at some point rubber meets the road and you have to make a decision. But even when we’re at that final stage of collecting the consensus, I think it would be good if we explicitly made space for key opinions to be noted both positive and negative that are not only ‘I block this’.

DE: And so the goals would be understanding better how strongly the committee supports a proposal, and also allowing for some kind of dissent without it blocking. Sometimes it seems like making any kind of negative comment is threatening to block, based on the earlier convention of not constantly explicitly threatening blocks, but that makes it hard to interpret and deal with negative feedback. Most committee members will typically feel a bit positive or negative about a proposal; it's important that we listen to most people, not just people who like to block. So the proposal for the change is, in the how-we-work document, to explicitly spell out how consensus is gathered. We would ask the committee for explicit support, and in my opinion we should require multiple statements of explicit support. I think that's a pretty low bar given the size of what we're doing.

DE: It could be as simple as, you know, because sometimes we like to have these obscure procedures, we could say "I nominate this for consensus" and "I second it for consensus" among delegates. Or, on the other hand, maybe that formality is too much.

DE: It would be great to have, you know, a brief rationale for why people want to support things. Again, I think this is a really low bar. If people can't articulate why they think something should happen, and only the presenter can, does the committee really have consensus on it?

DE: I would also want to explicitly solicit non-blocking dissent and give space for this to be discussed, because it currently feels a little too high pressure to raise concerns. This has been a problem for years: years of people either not raising their concerns, or raising their concerns and seeing them be misinterpreted as a block. Both of those things simultaneously occurred. Maybe a worse problem in the past than recently.

DE: So do we have consensus on consensus? Is this a reasonable slight change in the process for gathering consensus at the end of the TC39 topic?

WH: I like the idea of expressing concerns even if they're non-blocking. The concern I have with this is the paragraph about objections: I don't like the restriction that objections must 'only be relevant to the current stage'. In particular, this means that we cannot ever decide not to advance a proposal from stage 2 to stage 3 because we've come to a realization that it's not a good idea to do the proposal at all.

DE: Yeah. Honestly about the stage 2 to stage 3 transition, I completely agree with you. Maybe this requires some iteration on the document. I feel like I was recording current practice, like, if someone is proposing something for stage 1 and they have a grammar error, it shouldn’t be a reason to not go to Stage 1 (as in the recent case with Type Annotations, where there were significant concerns raised about grammar but it went to Stage 1 anyway).

WH: I would disagree with that restriction as well.

DE: Well, I feel like we frequently have a thing where the chair says: 'is the objection a stage N concern?' Anyway, I'm fine with excising this from the document. I was trying to fully document what we do now. We can leave that, you know, to be discretionary or something.

WH: I would prefer to excise that. People are going to read that text and then try to object to objections.

DE: I think that has definitely been happening over the years, objecting to objections. But we can leave it ambiguous whether they are allowed to object to objections and then object about that. How about that?

WH: Okay.

DLM: Yes, thank you. We discussed this internally. We think this is a good idea. We support both of the points that DE has made here. I think, given the size of our meetings, having one or two people support something for consensus is a very low bar. I think it's a good idea. If no one wants to speak up, that certainly communicates a message, more than what we have right now, and I think also, for the people who have gone through the trouble of presenting something for consensus, having some explicit support rather than silence is a reward for their efforts as well. On the topic of non-blocking concerns, I also think that is a very good idea. I'm new to the committee; from what I have seen, and from reading through meeting notes in the past, people voice their objections very softly, I assume because they don't want to be taken as a blocking objection, and having something in the process that makes it explicit that someone can raise a concern which shouldn't be interpreted as blocking is also a step in the right direction. The SpiderMonkey team definitely supports this proposal.

USA: Next up we have MF and KG with explicit support. And then next there’s CM.

CM: Yeah, I wanted to add another argument in support of this, which is that we’ve shifted to entirely remote meetings or in person meetings with a greatly increased level of remote participation. When everybody is in the same room with each other it’s much easier to read things like body language – if you have a bunch of people sitting around the table and nodding or smiling or giving thumbs up, subtle cues of support are there. Whereas in a remote meeting, you have to actively assert yourself and so a lot of times when somebody says “can we have consensus?” and there’s crickets, it’s because nobody is objecting but it’s sort of not part of the norm that people who support it should speak up. I like the idea of encouraging them to do so.

MLS: I want to point out that the TC39 version of consensus is actually 'unanimity'. One person can block or veto. Consensus according to the Oxford dictionary is "general agreement"; Webster's has "agreement or belief". There are times when the committee requires unanimity, such as stage advancement. There have been cases where somebody used that historically in a malicious way and blocked a proposal they just didn't like for whatever reason, without really stating reasons to the rest of the committee. So I wanted to be clear: we don't actually practice consensus in our deliberations. We practice unanimity for stage advancement, and if somebody is willing to block, that is a veto, which we sugar-coat by saying "block consensus", and so the proposal doesn't advance.

MLS: I do agree that we often have times where somebody proposes something and no one says anything, yes or no. It is good to have some explicit input; I agree with what CM has to say.

DE: I want to say about unanimity, if there was a process proposal to slightly revise it while maintaining the important positive qualities that I mentioned, then I wouldn’t necessarily be opposed to that. But we don’t have any on the table and so this is kind of an attempt to improve things within the existing unanimity system that has some positive qualities.

MLS: Okay.

YSV: I just want to say that I also very strongly support explicit discussion. We have, I believe, a failure mode where we go into a mode of silence, and when things aren't being said, in a sense it encourages and engenders further silence. There have been times where I have been a lone objector and said, unfortunately, I need to block, and where I would get a series of messages after the fact saying thank you for blocking, because I wasn't sure about this proposal either. And I think that means we don't have good enough communication, early enough, about those concerns that might be objectionable; it happens only at the point that someone says we really can't let this go forward. This probably means we may not always be achieving the best design we can.

YSV: For this reason I believe that explicit discussion of both the positives of a proposal and the negatives of a proposal, without the social capital loss that occurs when we block, is important, because then people aren’t worried about putting a target on their back by saying “I’m just not sure about this yet, and I want to have more discussion about it” and then blocking. This is, I think, already a failure mode, and I think we might have seen it in a couple of proposals.

YSV: So huge +1 to explicit discussion of both positives and negatives because that means we’ll have more discussion. People who aren’t as experienced will feel safer speaking up. And that’s very important.

DE: That said, there has been some discussion inside of the Matrix room about whether or not these – what’s appropriate to block. We actually have guidelines on blocking and I think those guidelines are currently good. And it would be a lot to change them. So I think that this is more informal documentation of how we can work like this.

DE: Are those guidelines on blocking documented somewhere?

YSV: Yes, they’re documented in the process document (https://tc39.es/process-document/) under the heading “Tips for Consensus”, the tips for achieving consensus. Also, stage 1, 2, and 3 advancement can be blocked for any reason, while stage 3 to 4 has restrictions on blocking. That’s in the document.

DE: Okay, perfect. So I will cross reference those rather than inventing my own thing in the text. Does that resolve your concern WH?

WH: Things went by fairly quickly, what are you proposing now?

DE: You were expressing concern about the short text that I had, which said that to object to a proposal, it has to be for a reason. I had forgotten about YSV’s text on blocking, which is now incorporated into the process document.

YSV: You remembered a couple of years ago we worked on –

WH: Yes, I remember that. So Daniel, what is your proposal now for your pull request in light of Yulia’s document?

DE: That I would replace that paragraph to cross-reference the blocking guidelines in the process document.

WH: Okay.

USA: So that seems like – that’s all of the queue. It seems you have a conclusion, DE?

DE: Yeah. I do want to kind of quickly poll people how to do this procedurally? Do we want it to be one or multiple people who we ask to endorse the proposal? Do we want to have some ceremony by saying I nominate this, I second this, etc? I guess we probably shouldn’t use the emoji reaction tool. That doesn’t let anybody put the reason in. The reason is the big piece of this. So does anybody have comments on that?

KG: Yes. I don’t think we need to do the whole “I nominate, I second” thing, mostly because I don’t want to keep track of who is doing which thing. I do want there to be – like, if you are explicitly in favour of advancement and want to be one of the people putting their name on it as explicitly supporting advancement, you should say that. Maybe it needs to be at least the champion plus one other person; I forget what the exact proposal is. But whatever the proposal is for how many people need to explicitly support, they should say “I support advancement”, and assuming sufficient support, it advances.

DE: They write down the names and reasons in the notes as part of this?

KG: I don’t think they need to have reasons. It’s just presumably the reason is because you think it’s a good proposal. But I would be in favour of the names at least.

DE: Part of this is I explicitly want to solicit the reasons. Do you think that makes sense?

KG: I’m happy for there to be more discussion during this part of the process. I’m mostly happy about it so that people who aren’t in full agreement that everything is excellent have the space to say that. For the people who think it is good in exactly the form it is in, there’s not much more to be said: we have just had the champions presenting all of the reasons it’s good. Now, if you like it for a different reason than the champion, say so. If you just agree with the champion and want to support it, you don’t need to say anything more than “I support advancing”.

DE: Great. Can we agree on two people as the minimum bar here? I would prefer that. There was a back and forth in the issue. Any thoughts on this?

KG: I explicitly support having at least two people explicitly support advancement.

DE: Do we have a second explicit support or (hopefully nonblocking) concerns? [NB: Whoa, this went self-referential!]

DLM: Yes, I agree with what Kevin said. I think two people is a good number and I also agree with the point that I support advancement is sufficient and someone can optionally provide a reason. I don’t think they should be required to provide a reason.

KG: We should be clear: are we saying two people in addition to the champion or champions, or just two people total?

DE: Two people in addition to the champions. If champions counted, what would we even have been saying this whole time?

KG: There’s a downside to that, which is that some proposals end up with many people working on them. It seems a bit of a shame that, just because you have been so interested in the proposal that you are contributing to its development, you then don’t count as one of the people who supports its advancement.

DE: Yeah, that’s kind of true. But I still don’t think it’s too high of a bar to have to attract multiple non-champions to explicitly be okay with the thing.

KG: That’s fair.

YSV: I’m fine with that. I think if you’ve been involved with the proposal and are trying to bring it forward, having someone from the outside explicitly support it is a stronger signal than being part of the champion group and supporting it moving forward.

MM: So I think I’m just reiterating what I think we all just agreed on. I’m not sure. If someone does state reasons, I think those reasons should be in the notes unless of course requested otherwise. But I think the default is that anything that is stated verbally with semantic content goes into the notes.

DE: Yeah, I think we agree on that. Further, since such comments are being encouraged for their special importance, at least a brief explanation of them should probably make it into the summary.

KG: So, DE, before we move on, we should give a chance for anyone who is not fully enthusiastic about this proposal to express nonblocking concerns.

DE: There are all these qualifiers, can’t you also be not enthusiastic and not have concerns?

KG: Yes. The thing you were saying: anyone who wants to say something other than “I’m in support” should get an opportunity to do so without being required to block at this stage.

YSV: I’m enthusiastically in favor of this proposal, with non-blocking concerns.

YSV: If we have a large champion group, like ten people championing a given thing, it might be difficult to get another two people who have a full understanding of the proposal. I would say that’s actually really good: it challenges the committee to have better information-sharing about what a proposal is and what’s about to enter the language. I think that’s good.

YSV: My main worry is maybe it’s not enough. My other worry is that we’ll have this process in place. And then slowly we will fade back into a standard of silence. Those are my concerns but nonblocking.

DE: I think those are both pretty good concerns. Given that there was a push towards one, I feel like two is an okay midpoint for us to start at. It will just take work from all of us to encourage committee participation and avoid the failure mode YSV cites. It will be a process, not something automatic based on a certain set of rules.

DMP: Just a couple of non-blocking concerns, comments. This is a really complex topic for a Monday morning and I just wanted to say that I think that there are a lot of intricacies here. So I don’t have any blocking concerns but that’s my input.

DE: Feel free to join the TC39 inclusion matrix channel (#tc39-inclusion:matrix.org) or join our biweekly calls to discuss concerns here because I think ultimately we’re kind of talking about inclusion in the discussion which is very intricate and which this small process change will not resolve on its own.

Conclusion/Resolution

A version of this consensus procedure is adopted. Multiple committee members from outside of the champion group need to explicitly voice support for a proposal to achieve consensus. This can be in the course of discussion, or in a section at the end, where such support will be solicited. The chairs will also ensure there is a chance for non-blocking negative feedback to be given and discussed. We decided to avoid excessive formalities around nominating/seconding and restating things that had already been stated in earlier discussion.

The PR contains an error, where it defines a bit about what sort of objections can happen. The definition is too strict. Instead, that comment needs to be edited to point to the part of the process document which relates to consensus.

DE will update the how-we-work PR to reflect all of these changes, and it can land async in GitHub.

Reminder to audit Github delegate team

JHD: Especially with the unfortunate layoffs that have been happening, it would be great if whoever is responsible for your Ecma member company could look over the list of delegates for that company and file the appropriate issues if any of those folks are no longer employed. This of course assumes that your company has informed you in some way who those people are. So please keep an eye on that and let me know if you have any questions.

RPR: We hope that everyone who is already a delegate here is not offboarded.

Prototype pollution mitigation / Symbol.proto

Presenter: Shu-yu Guo (SYG)

SYG: Cool. This is a new proposal about some thoughts that we have at Google – actually beyond just the Chrome team, in collaboration with other teams at Google – on how we might start to address the prototype pollution problem. I’m presenting and championing this, but the credit and authorship goes to Santiago and June from Google. I believe SDZ is with us and I might lean on him for questions and details that I’m not sure of. Can you confirm that Santiago is here?

SDZ: I’m here if you can hear me.

SYG: I can, thanks. Cool. We’ll get started here.

SYG: What are prototype pollution attacks? We have an intuitive understanding of what a prototype pollution attack is, but I had some difficulty pinpointing exactly what we mean by it. The salient property of the attack is that, at its core, it is data-only: it is not about injecting script or executable code, it is about manipulating data in the objects of the application in a way that invokes some spooky action at a distance. You change an object, and suddenly, because of prototype inheritance and how JavaScript works, you have overwritten or made visible a new property somewhere else entirely. Because it’s data-only, it sits outside existing mitigations like CSP that are focused on stopping code injection; this is decidedly not injecting code. Eventually, to chain the pollution into a full exploit, some piece of code does get run, but this part of the exploit chain is data-only. You might think that if it’s data-only it will just lead to misbehaviour of the application and not necessarily to security vulnerabilities, but there are CVEs, as we’ll show later, where prototype pollution results in cross-site scripting as well as remote code execution. So, to shoot myself in the foot here: one of my contentions as a representative of Chrome is that we’re particular about the word “security” and what exactly security means, and I’ve used the word security a bunch of times on the previous slide. You might be wondering why I am considering this security.

SYG: Prototypes in the JS object model, and the inheritance machinery, are just how JS works – it is exactly JS working as intended. Precisely because it’s JS working as intended, I can’t really consider this browser security, and the browser can’t mitigate the attacks we’re seeing in web applications, because that is simply how JS works: we have to allow prototype inheritance to work, and because of that, prototype pollution works. But because we’re seeing the negative consequences in the wild, we’re thinking about how to mitigate this, and possibly change some core semantics, as I will go into later. So I will be loose and call it security: the end result is bad for applications and bad for the JS ecosystem, despite not being security in the browser sense. I’m going to start with a somewhat complicated but real-world example of a Firefox sandbox escape, where prototype pollution was used as a critical part of the exploit chain to escape the Firefox sandbox. I have on the screen a piece of copy-pasted code. I forget exactly what it does – it saves something; it saves the task, I guess. It runs a privileged job script as part of UI code, but the data parameter is unprivileged: it is received from some postMessage. So taskSave is a thing that I think runs in the browser process, in some privileged environment, and it gets its input data cloned via postMessage. You get this origin thing out of it. As you notice in the third arrow at the bottom of the screen, it assigns through this.notifications using origin and notification.id as string keys, and one of those string keys could be __proto__. That was step 2. Somewhere else there is code to save and reset the attributes of a tab, I believe. I should have led with this: the Firefox browser UI is written in JavaScript itself – this is UI code that controls the Firefox UI. So this function is also privileged UI code that runs as part of tab restoration. When you start a new browser window and it asks “do you want to restore where you left off, with your tabs?”, this is part of the code that runs. It has a parameter with a default of a plain object – an ordinary object with Object.prototype, not Object.create(null). In the loop below, it reads from the data which attributes should be set, and then it sets them. Notably, since data is an ordinary object, data[name] could access polluted enumerable properties on Object.prototype. Part 3 of this exploit chain – exploits are fun stories to read but very complicated; the researchers are smart, they can hold a lot of complexity in their heads and see how things fit together so that, through five-plus pieces, something bad happens. The third piece here is that there is a piece of code that gets executed on certain events on the tabs.

SYG: onOverflow is a hook that gets called when the tab text is too long. Piecing it all together, what you can do is pollute the prototype with an overflow property whose value is a string that turns off the sandbox, and then cause tabs to be restored in a way where the tab text is too long; onOverflow gets called, you end up turning the sandbox off, and you go on with the rest of the exploit. So prototype pollution by itself isn’t enough to produce this sandbox escape, but it was a critical part of the exploit chain – that is the message here. As for other bugs that we know about that are caused by prototype pollution, these data-only attacks: if you follow the link to the slides you can click through and see them. There are vulnerabilities leading to XSS via prototype pollution in a bunch of libraries. At the time of writing, I don’t think we have gotten the go-ahead for disclosure, but there is a Google-owned desktop application where a malicious JSON object could allow local files to be leaked; that one is not yet public. Due to prototype pollution and how JS works, you could side-step the sanitization, and at the same time you can’t really fix it as the Sanitizer API author – that’s just how JS works, and it’s out of scope for you as the sanitizer author to fix prototype pollution even though it causes real problems. There are also some RCEs in Node, and Santiago pointed me to a bunch of white-hat security researchers who have written about prototype pollution; there is some uptick of interest in the community in using it to exploit web applications. So that is the problem.
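For concreteness, here is a minimal, hypothetical sketch of the pollution pattern these data-only attacks rely on (the property names are illustrative, not the actual Firefox code):

```js
// Two attacker-controlled strings, e.g. arriving via postMessage or parsed JSON.
const origin = "__proto__";
const id = "onOverflow";

const store = {};                       // ordinary object, inherits from Object.prototype
store[origin][id] = "attacker payload"; // store["__proto__"] evaluates to Object.prototype

// Completely unrelated code now observes the polluted property:
const attrs = {};                       // plain object with the default prototype
console.log(attrs.onOverflow);          // "attacker payload": action at a distance
```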

SYG: A common solution – kind of the first thing that many folks reach for if mutable prototypes are the problem – is: what about freezing the prototypes? We have tried this in the past, and it is difficult to apply to existing applications, especially ones that want to run off-the-shelf library code. The ‘override mistake’ is endemic and difficult to work around. Even if you could work around it and had a completely first-party environment, apps have polyfills that need to actually mutate built-in prototypes to polyfill missing features; that puts the onus on the application to find a freeze point, which is a nontrivial task and a deployment concern. There are also size concerns if you have to freeze application-defined prototypes: the technique is general enough that most of the time you get the most bang for your buck by polluting the built-in prototypes like Object.prototype, but if you have a large application you could also pollute the application’s own prototypes to escalate privileges and exploit the application. You need strict mode to get non-silent breakage, which is not great for DX if you have to point out the potential issue to application developers. Most interestingly – and this is a very recent CVE; I suggest folks interested in these things follow the detailed walkthrough – some application called NodeBB, some sort of server-side forum software, was exploited with a technique that freezing the prototypes would not have prevented: they overrode a something.constructor property via this kind of data-only pollution. The point is that it was a data-only attack where you mutate something not necessarily on the prototype, and that can still lead to a security vulnerability; that is what led us to a solution that is not freezing prototypes. That CVE is interesting because it is an attack that would have been prevented by the somewhat radical change we’re proposing here, but not by freezing the prototypes.
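For reference, a minimal sketch of the ‘override mistake’ mentioned above, which is a big part of why wholesale freezing is hard to deploy (illustrative only):

```js
"use strict";
Object.freeze(Object.prototype);

const point = { x: 1 };
try {
  // Assigning a property that shadows a now non-writable inherited one throws
  // in strict mode (and silently fails in sloppy mode), even though the intent
  // is only to give `point` its own toString.
  point.toString = () => `(${point.x})`;
} catch (e) {
  console.log(e instanceof TypeError); // true: the "override mistake"
}
```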

SYG: The TL;DR is that, despite prototype freezing being a capability the language already has, we have found at-scale deployment of it to be impractical, so we can’t use it to move the needle on reducing vulnerabilities here. So what we’re thinking is, as a starting point: can we cut off the access paths to prototypes instead? The key observation is that prototype pollution exploits rely on unintentional paths to the prototype that the developers didn’t consider. There are three string property keys, and combinations of them, that give access to prototypes: ‘prototype’, ‘__proto__’, and ‘constructor’. Can we cut off that access? The distinction between intentional and unintentional access is important; what I said on the previous slide is about preventing unintentional access paths. Our assumption is that static property access via dot is a good proxy measure of intention by the application developer: if you’re actually typing obj.prototype we assume you mean the prototype, as opposed to doing something like obj[key]. All of the attacks in the wild rely on computed access, i.e. unintended access. I want to take a quick sidebar here. The fact that we have these string property keys as access paths to these deep object-protocol things is a core design mistake of JS that has been there since day one. Someone pointed me to the term that Gilad Bracha coined, “stratification”, which says that meta-level facilities must be separated from base-level functionality. Property access is a base-level thing; prototype fiddling is a meta-level thing. Combining the two via the same language facility – property access – opens the can of worms of trouble we’re seeing right now. Ideally we would have a stratified design, with explicit reflection APIs that let you do the prototype fiddling, while property access could not; it’s too late for that, but as a sidebar, going forward “stratification” seems like a good property to keep for any programming language. So why do we want to solve this – cutting off these string access paths – in the language? As we have described, the common root cause is breakage of the encapsulation of data versus code, and we can’t solve that in userland; it’s impractical to deploy. Without language changes it also, importantly, remains outside the threat model of existing mitigations, because JS prototypes are just how JS works: if we don’t change how JS works, mitigations can’t really work around a core feature of the language. It’s infeasible, for example, to taint-check all data flow, and sanitization is about code – this is about data-only attacks. Tying into the stratification design principle, I think stratifying prototype access is high-impact even by itself in the language. We have already moved in that direction – we have Object.getPrototypeOf, and we had to keep __proto__ for existing code. Can we do something more radical with the opt-in mode that I’m about to present?

SYG: So the current thinking on solving this is a two-part solution. One part is an opt-in secure mode that removes the problematic string-keyed access paths; it is opt-in, and it is backwards-breaking. At the same time, we would add new reflection APIs. What those reflection APIs look like is totally up in the air pending discussion – maybe they could be new symbols, maybe Reflect.whatever. The idea is that we don’t want to take away the capability of prototype fiddling, but we do want to take away the unintentional, really-easy-to-accidentally-get-wrong capability. So, the secure mode – which is not a great name, but we need something to be able to discuss it – the whole point of this secure mode is to cut off string-based property access when opted into. There are two main options on which paths to cut: we can cut off __proto__ and prototype, or __proto__ and constructor. And how do we opt in?

SYG: This is a pretty big design question. We’re not presenting a concrete recommendation here for stage 1, but there are many possible ways to opt into this secure mode: it could be out of band, it could be a use directive, it could be a function call, or, more radically, it could be an automatic opt-in when you use the new prototype reflection APIs. On the second bullet point, about how to access the prototype: for the bracketed __proto__ access we already have Object.getPrototypeOf and Object.setPrototypeOf. For constructor prototypes – I mean .prototype in ES5-style classes, where you make a function, take its .prototype, and add methods to it – we could have any number of new reflection APIs to serve those use cases. It could be a new symbol, it could be a new reflection method, but the point is that having either option – new symbols or new methods – raises the bar for exploitation via prototype pollution as we see it in the current exploits: you would need to chain more bugs together to get the same effect as today. And the built-ins, like the prototype-from-constructor AO, would use constructor prototypes instead of the string property access keys. We would like to think this opt-in mode could be made secure by default by doing some kind of automatic rewriting. Instead of forcing everybody to adopt these reflection APIs manually, you could imagine – starting from the premise that static dot access is a good proxy measure of intentionality – finding all static property accesses along the existing string access paths and rewriting them to use the reflection APIs. This automatic rewriting could be done by tools and transpilers in the tool chain, and more ambitiously it could be done by JS engines. This is pretty speculative, but we’re thinking a little bit ahead about how you might deploy this at scale, because the problem with existing mitigations is that they cannot be deployed at scale; the easiest way to deploy at scale, to magically get it, is to have the JS engines do this rewriting automatically at parse time. That leaves an empirical question, which is: how much code out there actually depends on computed property access to the prototypes?

SYG: Our hunch is little to none. But as with any question about the JS ecosystem and the web, it is hard to get hard data; we just don’t know. Looking at the exploits and at existing code bases – I believe SDZ’s team has done some work looking at internal Google JS – they don’t depend on computed property access to the prototypes. So that is the presentation: the problem statement and the germ of an idea. To concisely summarize, the ask for stage 1 is an attempt to solve prototype pollution mitigation via an opt-in, backwards-breaking semantics change, and our current thinking is this new mode, which cuts off the string access paths. With that, I will open the queue.
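To make the rewriting idea concrete, here is a rough, purely illustrative sketch under the assumption that static dot access signals intent; none of these semantics are settled, and how computed access would behave in the proposed mode is an open design question:

```js
const obj = { x: 1 };
const someProto = { greet() { return "hi"; } };
const untrustedKey = "__proto__";      // e.g. derived from a URL or JSON body

// Today, both of these reach the prototype:
const viaDot = obj.__proto__;          // static dot access, presumed intentional
const viaComputed = obj[untrustedKey]; // computed access, the path exploits rely on

// Stratified equivalents using existing reflection APIs, which tooling (or,
// more ambitiously, engines) could rewrite the static accesses into:
const viaReflection = Object.getPrototypeOf(obj);
Object.setPrototypeOf(obj, someProto);

// Under the hypothetical opt-in mode, the computed access above would no
// longer walk up to the prototype; only the reflection APIs (or new,
// yet-to-be-designed ones) would.
console.log(viaDot === viaReflection); // true today
```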

RPR: Before we go to the queue, I think Anthony Bullard has a suggestion in the chat for the transcriber.

RPR: I think we can move on. The first question is from JHD.

JHD: Yeah, I mean, all the examples in your slides – if you opt into a secure mode, you have to know to do that; and if you know to do that, then you also salt your keys, or use Object.create(null) or { __proto__: null }, or use a Map or something. Unless you turn on the mode by default, I don’t think it would really achieve the goals you want. node, for example, already has a flag that lets you remove the __proto__ accessor and you can run it with that – but lots of arbitrary modules in the ecosystem rely on the functionality. I’m incredibly confident that trying to do this by default would break the web in sufficient quantities that it wouldn’t be viable, and I don’t see a lot of value in it if it’s required to be opt-in. That said, obviously the exploration area is great. Even though the number of prototype pollution attacks that turn into real exploits is nonzero, I think it’s small, but still worth addressing. I feel like the biggest benefit would be removing a bunch of false-positive CVEs from the ecosystem that cost a lot of developers’ time. But either way, I think it’s worth exploring – that’s a stage 1 concern – but I wanted to share my skepticism.

SYG: Noted. I want to lean on SDZ to provide a more detailed answer here, but first I want to respond to the node flag thing. We’re not saying we’re going to remove __proto__ entirely. The idea is that this is a two-part approach where dot access to .__proto__, .prototype, and .constructor keeps working, and the way we propose to make that viable is with automatic rewriting, so you don’t have to manually migrate an entire code base. The other thing, about using null-prototype objects: I think that speaks to the at-scale deployment issue. If you had the luxury of time to basically rewrite your whole world, then yes, you could just never use prototype inheritance at all. That seems like a challenge in itself, and at the very least, if you want to use third-party libraries, you can’t really do that. As an application, you could opt into the mode, and with the automatic rewriting you get the benefits for free. We share your concern that without the automatic rewriting step, a pure opt-in will be difficult to get deployed and working. SDZ, do you have anything to add here?

SDZ: Yeah. I want to speak to the idea of using Object.create(null) or the __proto__: null literal as a mitigation for this, because I think it’s important to understand why we think that doesn’t work. We did a few experiments with it and found a few problems. The first one is that you might create an object (inaudible) that doesn’t have any prototype and think that it is secure, until some value attached to the object turns out to be an array or a number or a string or maybe another object – and now that value has a prototype, right? What you’re doing is essentially moving the goalpost one level deeper. And you really don’t have a way of creating, let’s say, a string with no prototype, or a number with no prototype, or an array with no prototype, all of which could be polluted if they went into a commonly vulnerable function. Apart from the issues you would have in deploying it – finding everywhere you have an object literal and replacing it, which is, granted, something you can do; and as the previous speaker said, if you’re willing to do that you are willing to do (inaudible) – you are only protecting one object. I think those are the strongest reasons why that solution is not good enough.
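A small example of the limitation SDZ describes: a null-prototype container only protects itself, not the ordinary values placed into it (the property names here are hypothetical):

```js
const safe = Object.create(null);    // the container has no Object.prototype
safe.config = { retries: 3 };        // but this nested value is an ordinary object

// Two attacker-controlled strings can still walk up from the nested value:
const k1 = "__proto__", k2 = "isAdmin";
safe.config[k1][k2] = true;          // writes to Object.prototype

console.log(({}).isAdmin);           // true, pollution despite the null-prototype container
```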

RPR: Thanks. Let’s see, next on the queue is Waldemar.

WH: I objected to the inclusion of __proto__ in the language in the first place. I didn’t win that battle. So I fully support this for __proto__.

WH: For prototype, the presentation does not explain what the problem is. If I create a new object and happen to create a property named prototype, what goes wrong?

SYG: I’m not sure I understand the question. If you create an object and you have a property named ‘prototype’.

WH: This presentation is about manipulating user data objects. If you mess with __proto__ you can create all kinds of mayhem. If you just add a property named prototype, what goes wrong?

SYG: I think on an ordinary object, nothing goes wrong. But if you get to Object.prototype via computed property access, now you can control it – now you can make properties manifest, via prototype inheritance, on other objects, is my understanding.

WH: Those are different categories.

SYG: It’s also not just Object.prototype. If the application defines a prototype on some constructor function and you can get to that, you can also cause action at a distance and exploit the application’s objects.

WH: If you can write to that then you’ve lost already.

SYG: We could say that we’ve already lost and should do nothing here; we’re proposing that we haven’t necessarily lost, and that there may be something to do here even if you have access to that. A lot of that access is unintentional, via computed property access.

WH: The problem is you don’t have just prototype to mess with. The presentation seems to be contrived in only looking at __proto__, prototype, and constructor. But the stratification problems reach much further. There are all kinds of things to mess with such as then and toString.

SYG: That’s a fair point.

WH: Is fixing the stratification problem in the scope?

SYG: It is not in the scope of this proposal to stop all monkey-patching behaviour that can cause issues in JS. But looking at what kinds of exploits attackers are doing in the wild – going by that data – fixing access to prototypes, and to stuff reachable via constructor and prototype, will have an impact on the security of the ecosystem, despite there being other things that can be monkey-patched. SDZ has more to say about the –

SDZ: If I can add something: I agree that the line between any logic bug you might have and actual prototype pollution is blurry. I think that’s why we approached the problem from the point of view of action at a distance. If you have a bug that allows you to set properties on a single object instance, that categorically belongs to a different class of bugs than one where you can affect all objects at once. In order to do the latter, you need to go up the prototype chain, and you might do that through the __proto__ or prototype properties or through the constructor. There seems to be a really important distinction between __proto__ and, let’s say, the constructor, versus just defining a new ‘then’ function. Maybe that’s a good way to draw the line on where the problem is.

WH: I still don’t see the distinction. Both of them are about stratification – a lot of things rely on toString. If you manage to mess with toString you can cause a lot of the same problems.

RPR: Move to Mark now?

MM: First of all, I just want to express my appreciation; I’m glad Google is looking at these issues. I think there’s a lot of potential synergy between what you’re talking about and the work we have been doing on ‘Hardened JavaScript’, as we’re now calling it. I do want to mention, though, that there is a double standard that I want to bring to your attention and to the committee’s attention: the work that we’ve previously used the term security for satisfies all of your justifications for introducing the term security, and Google has been the one objecting to that. I think that objection has actually been productive, in that it forced us to clarify by dividing security into integrity, availability, and confidentiality – the traditional separation. Your proposal, like our proposals, is primarily about integrity; I don’t see anything here addressing confidentiality or availability. So I would like you to adopt the same standard for what you’re proposing as you’ve been insisting on for other proposals. But in any case, welcome to this set of problems. A lot of these problems, and a lot of the solutions, align very well with what we have been doing in Hardened JavaScript – (inaudible) Hardened JavaScript rather than SES, because SES stands for Secure EcmaScript and the term ‘secure’ is what we’re avoiding for these reasons. We do something you’re probably not – can you go back to the slide about freezing prototypes?

RPR: Mark, I will say your audio is occasionally cutting out. We can still understand you, but there are fairly regular cut-outs.

MM: Is this better?

RPR: Probably, yes.

MM: Okay. So let’s go through each of these bullets, and thank you for enumerating them. First I want to make a general point, which is that the way “Hardened JavaScript” now works, the first phase is a repair phase that changes the environment in many ways, including neutering certain constructor properties; it turns the environment into one that safely removes a lot of sharp knives, but does not lock it down yet. After that repair, there is the ability to run vetted shims in the repaired environment that has not yet been locked down. The lockdown is something that can happen after running the vetted shims, and it’s the lockdown that freezes all the primordial prototypes – all of the intrinsics. So yes, the polyfills need to find a freeze point; that is a consequence of our way of doing things, which is that pollution is possible before the lockdown, and therefore the shims, the polyfills, vetted during that stage have to be non-malicious. On the override mistake: since both of us are proposing a new mode – and I think they should be merged into one new mode; I’m sure the committee doesn’t have appetite for two modes addressing overlapping security problems – as long as we’re introducing the mode, in that mode the override mistake could be absent. On the file-size concerns with freezing all app-defined prototypes: we have a nice helper function that we use, called harden, that does a transitive walk of own properties and prototypes, freezes everything it finds, and adds it to an internal WeakSet of everything hardened, to avoid duplicating work. We have found that very good for apps protecting their own prototypes; lockdown only protects the built-in prototypes. The harden calls sit inside the modules that export the things being hardened, so they export them in a hardened manner, and those hardens don’t run until the module is loaded – and anything trying to address file-size concerns in an application that contains many, many modules has to somehow postpone initializing most of those modules anyway. On needing strict mode to get non-silent breakage: Hardened JavaScript enforces strict mode. One of the repairs that happens in that initial repair phase is that all of the evaluators now evaluate only strict code, so it’s not possible to introduce new sloppy code after the repair phase, even before the lockdown phase. And on the at-scale point – I tried looking at the CVE, and to the extent that we have been able to figure it out, it’s not an easy CVE to read, so I think I will postpone further comments on that. There’s a clarification I see on the screen (node on slide 9).
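A rough sketch, not the actual Hardened JavaScript implementation, of the kind of transitive harden helper MM describes: it walks own properties and prototypes, freezes everything it reaches, and memoizes in a WeakSet to avoid duplicating work:

```js
const alreadyHardened = new WeakSet();

function harden(root) {
  const queue = [root];
  while (queue.length > 0) {
    const obj = queue.pop();
    // Skip primitives and anything already hardened.
    if (Object(obj) !== obj || alreadyHardened.has(obj)) continue;
    Object.freeze(obj);
    alreadyHardened.add(obj);
    // Descend into the prototype and into every own property, including accessors.
    queue.push(Object.getPrototypeOf(obj));
    for (const key of Reflect.ownKeys(obj)) {
      const desc = Object.getOwnPropertyDescriptor(obj, key);
      queue.push(desc.value, desc.get, desc.set);
    }
  }
  return root;
}

// Example: harden a module's export so its prototype chain is frozen too.
// (In real Hardened JS the built-in prototypes are expected to be frozen by
// lockdown already; this sketch would also freeze them.)
class Point { constructor(x, y) { this.x = x; this.y = y; } }
harden(Point);
```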

SYG: A very high-level summary of why we think –

MM: How does it cause – I can see how bad data, given fallible but non-malicious code, can cause Object.assign to be looked up. How does it cause it to be invoked?

??: I think it changes it –

SDZ: Yeah. This is a feature that the application provides: you give it a piece of data and the name of a function to operate on the data. What the developers are expecting is that only the functions in the class that provides this feature will be allowed to be called – that is a reasonable expectation for a developer to have. The problem is that you can escape that by going through the constructor, in this case, referencing assign, and completely changing the state of the feature that was reasonably built.

MM: So I did not understand that explanation; I’m going to pass on trying to understand this in real time during my comment, and let’s resume this offline. But the most important issue is the practical issue of at-scale deployment, and what we have found is that, aside from shims and polyfills, a tremendous amount of existing code that was not written to be compatible with ‘Hardened JS’ turns out to be compatible with it. We’re familiar with packages transitively pulling in other packages that you didn’t know about and aren’t aware of – that’s happening to us too – and generally we’ve been surviving, because a tremendous amount of existing code is compatible. I would say easily 90% of the incompatibilities that we have found are due to the override mistake. So if a new engine mode simply suppresses the override mistake, then almost all of the incompatibilities that we have found in practice go away. In any case, I want to conclude by saying I think we should join forces; these are very aligned efforts. I’d like to volunteer to be a co-champion on your proposal, and I would invite you to present to the weekly SES meeting, where many other members have discussed proposals; I think they can all vouch for those discussions having been very productive.

RPR: All right, thank you, Mark. We seem to have quite a lot of messages of support. Given that we have 12 minutes remaining, please just be aware that we have quite a few items on the queue. Jack.

JWK: Hi. I support the motivation of this proposal, but I doubt whether the currently proposed solution is really viable. Okay, I’m done.

SYG: Can I ask a clarifying question of the folks on the queue who have doubts and skepticism about the viability? Yes – we, as champions and authors of the proposal, also have skepticism, but I consider it a somewhat empirical question. It’s not that we have convinced ourselves that the other solutions don’t work and that this is the thing that will work; it’s that, given other attempts internally at trying to fix this issue, at least for Google’s problems, this seems like a realistic way forward, and there will be actual experiments performed and so on. So for the delegates with skepticism: are you saying “let’s wait and see, with empirical evidence”, or are you saying “I don’t know if it would work, but it would be great if it does”?

JWK: I’m the latter one.

RPR: DLM.

DLM: Yes, thank you. I will point out this has been a problem in Firefox UI code in the past. We have implemented our own hardening internally to try to mitigate this in the future. This is definitely an area that we’re interested in. And I think it is worth exploring. That’s it. Thank you.

RPR: Thank you. And Philip.

PFC: I support this going to stage 1. I think wherever there’s an unintended effect of the status quo, that's new information. It’s worth evaluating whether that changes some of the tradeoffs we made in the past. I think this is a good idea to investigate.

RPR: Justin.

JRL: I would absolutely love this if we could disable computed property access where the key could possibly be __proto__. All of the exploits that I have ever seen have been unintentional accesses to the dunder proto when you don’t know what the key is, because it comes from some user value. If we can change the behavior so that computed access prevents reaching the prototype, while intentional dot __proto__ access still allows you to manipulate the prototype, I think this could be web compatible. I would love to see it work.

MAH: I have a quick reply. I have seen pollution via constructor-named access, not just __proto__ – .constructor, .prototype, things like that. Just preventing __proto__ will not be enough.

KG: I agree that preventing __proto__ will not solve all problems, but it solves some. Still worth investigating.
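A sketch of the kind of non-__proto__ path being discussed, reaching Object.prototype through the 'constructor' and 'prototype' string keys (illustrative names):

```js
// Three attacker-controlled strings, e.g. parsed out of a request body.
const k1 = "constructor", k2 = "prototype", k3 = "polluted";

const obj = {};
obj[k1][k2][k3] = "oops";     // obj.constructor is Object, whose .prototype is Object.prototype

console.log(({}).polluted);   // "oops": the same effect without ever using "__proto__"
```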

RPR: Duncan.

DMM: I think it looks very encouraging. I think it’s worth checking on languages that use JavaScript as a compilation target and allow you to do dynamic things. It might even be worth approaching those compiler writers if we find problems, and getting them to move away from computed property access anyway, even if we don’t adopt this proposal.

RPR: And Dan.

DE: This is a really interesting proposal. When I first heard about it, it felt a little ad hoc, but now that we see this kind of preponderance of vulnerabilities, I think it’s important to work in this space. I think it’s important to prioritise mitigations based both on how exploited the problems are in practice – and we can see this is fairly exploited in practice – and on how simple and contained the mitigation is, and this is a pretty simple and contained mitigation. There was earlier discussion about whether this is a double standard; I think it makes sense for us to bring this proposal to stage 1 because it scores pretty high on those metrics.

RPR: Mark.

MM: Yeah, so since I was the one who mentioned a double standard: that was with regard to the constraints Google imposes on other projects versus what Google feels free to engage in on its own projects. I don’t think that’s a substantive issue; I think it’s a cosmetic issue. I fully support stage 1 for this proposal. Shu, I look forward to working with you on it and to exploring the relationship between this and ‘Hardened JavaScript’.

SYG: Thanks, Mark. So, on the security thing: yes, this has been a point of discussion in the past – what exactly is meant by security –

MM: Would you accept the clarification that all of the security that you’re talking about falls in the area of integrity, not confidentiality or availability?

SYG: Absolutely. You said it much more precisely than what I meant when I said I don’t consider this browser security.

MM: Okay, good.

DE: Yeah, I also want to say I’m really glad that SYG is pursuing this within TC39, because although one might consider such a mode to be permitted by implementations under Allen’s reading of the section about permitted and prohibited extensions, I think it’s really important that if such a thing is shipped widely in JavaScript implementations, it be shipped compatibly and with this kind of broad input. I’m glad that we can have that discussion here, just as for CSP.

SYG: For sure. I think the ambition here – at stage 1 anyway, before we hit the trough of disappointment – is not just to fix things for Chrome users, and not even just for web users: given that we see CVEs coming out of the server side as well, we want to fix this for the ecosystem as a whole if possible.

RPR: Waldemar.

WH: I support this for stage 1 but I do have concerns about breaking invariants such as . access being the same as [] access and about the limited scope of what stratification identifier problems this is addressing. One concern might be that exploits are currently using prototype just because that’s the easiest thing to reach for, but if we disallow access to prototype, they might do the same thing via other magic property names.

SYG: That would be fascinating to learn, one way or another, if there are. But part of the security calculus, I guess, is that security is always a moving target. If we buy some time with this – because exploit writers now have to chain more bugs and spend more time finding bugs – that is a win. It’s not a categorical “we completely plugged this hole” kind of win, but I think security folks would still consider it a win. SDZ, go ahead.

SDZ: Yeah, I appreciate the concern; it is something that we have discussed a lot – to what extent can we really draw the line and say that you have to go through prototypes or constructors for something to qualify as this vulnerability. We are looking for ways to experiment and to get empirical evidence to support or reject the kind of hypothesis that Waldemar is proposing, for example by engaging security researchers and allowing them to look for pollution bugs that don’t go through the prototype explicitly. This is maybe a little bit on the sidelines of this particular proposal, but I’m hoping that we can get some empirical evidence of that and act accordingly. We’re looking for other ways in which we can do that; so far we have a few ideas and we will try them.

SYG: Cool. I think we’re on time with one minute to spare. Good job in the queue everybody. I’ll go back to the ask for stage 1.

RPR: Okay. Are there any objections to stage 1?

DE: Should we apply the new process, like, first about people supporting and having concerns?

MM: I’ve already voiced both my support and my concerns during the QA, is that adequate or should I reiterate it now?

WH: I did too, Daniel.

DE: That’s right. I was picturing this would happen during the completion section. There’s no need to repeat.

RPR: I think we had quite a lot of messages of support. If anyone took themselves off the queue with their support message, now would be a good time to bring it back.

DE: Should we record the support messages like in the summary of the conclusion so that people don’t have to read through the whole set of notes? I mean, even just the supporters?

RPR: Okay. If we’ve got a – if people want to go in the queue, we can read off a list of supporters.

??: I don’t have to formalize this too much.

??: ABU and JRL have both given explicit support, so we have qualified with our two supporters and more, I think. Daniel is on the queue as well, and hax. We have more than enough support, and I haven’t heard any objections. So congratulations Shu, you have stage 1.

??: Thank you very much.

??: Thank you everyone.

Conclusion/Decision

Stage 1

Reducing Annex B monkey-patching

Presenter: Kevin Gibbons (KG)

KG: Since this is a broad change, I figured it should be brought before committee – not for consensus, but at least for feedback. For background, Annex B contains – well, not all of them, but at least a large number of – the horrible mistakes that we wish we could get away from, which are optional for implementations that are not web browsers. Unfortunately we can’t get rid of them in web browsers because they are web reality; that’s why they’re in the specification in the first place – there’s too much code out there relying on them. As I am sure anyone who has worked with the spec for very long is aware, the fact that Annex B is factored out into its own section is a cause of endless trouble and a huge amount of wasted time, as people read the spec, think that the spec is describing the language, and then find out that actually it’s not. To get the actual description of the semantics that is required on the web, and for any tool that works with JavaScript that goes on the web, you have to read the main specification, observe the places where there’s a note saying that some step is different, then go find that step, mentally splice it into the original algorithm, and figure out what the full algorithm actually is. This is a huge pain – just so much wasted time – and it makes the spec a lot harder to read. A while ago we discussed getting rid of Annex B entirely, and there was some pushback on that, particularly around the grammar parts of Annex B, because, for example, the HTML comment and the ?? grammar are ambiguous, and that is something that we try to avoid. So the benefits of moving the grammar parts into the main spec are at least not entirely clear-cut. This is a PR which would move the non-grammar parts of Annex B into the main spec. It would not make them normative or change the status of anything; instead of having a note that says “this step is replaced”, there would be an inline step that says something to the effect of – the exact wording is something we can wordsmith later – “if the host is a web browser or otherwise supports this feature, then do the Annex B thing”. Sometimes there would be an else branch doing the non-Annex-B thing. So you can see what the actual algorithm is, inline with the rest of the algorithm. As you can probably tell, I am wildly enthusiastic about this change, but I would like to get feedback from other committee members, particularly other people who are reading the spec regularly, before we go forward with it. Does that all make sense? Before we get to feedback, I would like to make sure we have clarified what we are proposing to do.

RPR: Is there any feedback on whether we all understand this? There’s a question from WH.

WH: What is affected?

KG: Yeah. So, like I said, not the grammar parts, but pretty much everything else that is in Annex B. Let me see precisely: it leaves out all of B.1 Additional Syntax, leaves out the function declarations in statement clauses, and leaves out the initializers in for…in statement heads. Among the things that are moved: the [[IsHTMLDDA]] slot, the interaction of var declarations with catch clause bindings, the labelled function declarations, and our old nemesis, B.3.3 – the block-level function declarations one. The PR is open, #2952, if you want a more precise answer. It looks like a lot of text, but it is readable; it is mostly moving stuff around – go to the top of each part of the diff and see which step is being changed.

RPR: Does that answer your question Waldemar?

WH: The polarity is unclear. Are you saying this is or is not integrating the bizarre function scope?

KG: That would be integrated. The bizarre function scope would be integrated unfortunately.
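For context, the semantics in question are the Annex B.3.3 web legacy behavior for function declarations in blocks; a minimal sloppy-mode example of the var-like binding it introduces:

```js
// Sloppy mode only; strict mode and modules use plain block scoping instead.
function demo() {
  console.log(typeof f); // "undefined": a function-scoped var-like binding already exists
  {
    function f() { return 1; }
  }
  console.log(typeof f); // "function": the block's f was copied to the enclosing var scope
}
demo();
```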

WH: I object to that being integrated. Things like the toBoolean and comparison semantic changes are fine.

KG: Why do you object to this being integrated? I can say that personally I lost hours to this way back when I was starting out as a wee consumer of the specification. I personally lost hours, and at least one other member of the editor team lost hours, to the fact that this wasn’t integrated. I would really like to integrate that part.

WH: Yeah, that falls into the category of syntax.

KG: No, it doesn’t. It doesn’t change any of the legal syntax at all.

WH: I’m using “syntax” in the more general sense as something that affects lexical scoping behavior. That is very different from what happens when you pass an implementation object to toBoolean.

KG: I agree that it does affect the semantics. On the other hand, this does not have any of the issues with ambiguity that we raised when we previously discussed moving every part of Annex B to the main specification. I don’t know why the fact that it affects the semantics in a very core way is relevant to whether or not the algorithm gets inlined with this condition that says this only applies to things that support web browser features.

WH: I’m just not comfortable with this.

KG: Okay. I really, really want to do this.

RPR: Looking at the queue, we have one, two, three, four, five, six, seven strong plus-ones, including, with all of my hats on, “yes, please”. So there’s very strong support here.

SYG: A very narrow point on the lexical scoping: you have to squint to see it as lexical scoping. At the end of the day what is being specified is an environment – an execution-time, evaluation-time chain of name-to-value maps.

RPR: Okay. Just looking through all of the plus-ones and so on – DE, I see you’re questioning whether this ought to be consensus-seeking.

DE: I don’t know if we could skip the queue to me, but still, these are editorial questions; nobody is talking about making any of these Annex B things normative. I strongly support this proposal, and the proposal that I brought about bringing all of Annex B inline, and I think that this sort of editorial matter is for the editors to decide.

RPR: I see PFC saying the same thing.

KG: Yeah, to be clear, I wasn’t asking for consensus for the change. I’m just requesting feedback. And I hear WH’s concern. Although I don’t fully understand the source of the concern. I also hear a lot of support from everyone else. So I think as editors we are likely to go forward with this unless there is some more concrete objection that we can try to act on.

RPR: We’re at the end of time now. I don’t know WH, do you want to follow up with KG on this?

WH: Sounds like he will go ahead anyway.

KG: If you have a concrete reason or some change that you think we could make that would make you more comfortable with it, I would be happy to continue talking about it.

WH: I don’t want to stand up to a whole queue of people saying plus ones to do this.

KG: Fair enough.

RPR: Thank you, WH.

RPR: Okay, then, so we’re at time. And we have – for the notes at least – plus-ones from Dan, Shu, Duncan, and Mark.

MM: I would like to briefly respond to DE’s meta point about whether this should have been brought to committee to seek consensus. I’m not in favor of changing the rules, but I think that a refactoring of this magnitude should come to committee and seek consensus – a strong ‘should’. I think it would be rude to do this without bringing it to committee first.

KG: I don’t think it’s bad to bring it to committee. I just don’t know if it’s something that should be blockable, given the huge amount of support and the fact that it’s editorial.

RPR: This all makes sense. Thank you.

Conclusion/Decision

  • Editors will make this change, with WH's objection noted.

Symbols as WeakMap keys

Presenter: Ashley Claymore (ACE)

ACE: This is Symbols as WeakMap keys. If all goes well, towards the end I will be asking for Stage 4. A bit of context for people that might need it: as far as I could tell, this goes back to at least GCP’s issue on ECMA-262 (issue #2038) back in 2018 asking "why can’t we use symbols as WeakMap keys?". That issue alone, minus all the proposal things, has a lot of comments on it – lots of fun things to read – and that is what this proposal addresses. It says: yes, you can use some symbols as WeakMap keys, and not just WeakMap keys but also WeakSet entries, WeakRef targets, and the target and unregister token of FinalizationRegistry – the whole family of weak and garbage-collection-related APIs. In terms of the spec there are no new APIs per se; it’s just changing things that were previously a TypeError to no longer be a TypeError, and that is the observable change. A big part of this proposal was discussing “which symbols?”, and the answer is: all symbols except for those that have been returned from Symbol.for, a.k.a. ‘registered symbols’. Those are not allowed; all other symbols are, whether that’s a good idea or not. We reached stage 3 back in June. The PR against ECMA-262 is open; just to note that it hasn’t had editor approval yet, but it seems like it’s just final editorial tweaking, nothing normative. SYG left good comments and I have updated the PR after those. I’m not 100% sure on the policy here – I know Stage 4 requires editor signoff, so I guess I would like to ask for Stage 4 modulo editor review. I think the PR is 100% of the way there in terms of normative content and very close on the editorial changes. We have the test262 tests merged – thank you to PFC for writing those; that was massively appreciated. We also have two implementations, one in V8 and one in JavaScriptCore.
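To illustrate the observable change, a quick sketch (the first call used to throw a TypeError before this proposal; the registered-symbol case still does):

```js
const wm = new WeakMap();

const sym = Symbol("unregistered");
wm.set(sym, { payload: 42 });           // now allowed
console.log(wm.get(sym).payload);       // 42

try {
  wm.set(Symbol.for("shared"), {});     // registered symbols are still rejected
} catch (e) {
  console.log(e instanceof TypeError);  // true
}
```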

ACE: With that I would like to move on asking for stage 4 with explicit support from at least two people as well.

USA: JHD registered explicit support on the queue.

ACE: Thank you JHD.

DMM: I think this is a good idea and it solves the case for symbols. But that’s not the only question that has come up; there also seem to be questions about template literals with WeakMaps and WeakRefs. So while I think this is fine to take to stage 4, I would be interested in whether we could develop a common language and framework in the spec for specifying lifetimes of all objects that might be considered tied to a position in the code.

ACE: Yeah, other people will be better placed to speak on this. But I think there was a recent change, last year or maybe even a few months ago, to the definition of identity in the spec which covers a lot of things. I’m trying to remember now whether it explicitly calls out the template registry. I think SYG is probably the best person to answer those questions.

SYG: My personal opinions here may not reflect the editor group’s opinion. When we talk about identity, I think it’s best to stick to the text as we have written it. There are different ways to frame it when we talk about identity and forgeability and stuff like that. I think what DMM is talking about is caring about the forgeability aspect, but yes, that is the thing. It would be nice to record the design intention around that somewhere. I’m not sure we have all the tools within the spec yet to really accurately record it.

DMM: Yeah, I think within some other language specs, like the JVM’s, it’s much easier to build up the framework for explaining why objects persist, because there are class loaders and classes and constants in static fields, but we don’t quite have that in the ECMAScript specification, and that makes it harder to reason about.

SYG: We have the notion of liveness. Java talks about reachability; we explicitly don’t want to do that, because reachability would rule out things we already do and consider reasonable. Liveness is very difficult to pin down – we can give a semantic definition of liveness, which we tried to do in the spec, but it doesn’t help you build an intuitive understanding. I think what you’re asking for is better intuition.

DMM: Yeah. I think it would be useful for people to be able to build a good intuition of when objects like symbols and template literals can be used as weak keys and when they can be collected, without specifying it specifically for symbols or template literals, because we might invent another object down the line and it would be quite good to have it fit into the existing framework for this.

USA: All right. Next up on the queue is RMS with support and thanks everyone for your explicit support.

ACE: Thanks RMS.

USA: ACE, congratulations on stage 4. I suppose that’s all for the presentation.

Conclusion/Decision

  • Stage 4
  • Support from JHD and RMS

JSON.parse source text access

Presenter: Richard Gibson (RGN)

RGN: So this is an update on JSON.parse source text access. Hoping to get through it relatively quickly. I will just jump right in. Background first: we have a lossiness problem with JSON.parse. For example, arbitrarily precise sequences of digits in the source are parsed into Number values, and even though reviver functions exist and can interact with the parsed values, they don’t have access to the source, so it’s already lossy. If I want to represent this sequence of nines as a perfectly accurate BigInt, I can’t achieve that with the current functionality available. As a related problem, revivers lack context. So if you want to transform only part of a data structure, you’re left to figure out for yourself what any particular invocation relates to. It’s really easy to confuse, for instance, a string that looks like a special data type with the actual data type itself, and lack of that context causes problems and type confusion.
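(A minimal illustration of the lossiness described above, using made-up data:)

```js
// The digit sequence exceeds Number precision, so information is lost
// before any reviver ever sees the value.
const parsed = JSON.parse('{"balance": 999999999999999999999999}',
  (key, value) => value); // `value` is already an imprecise Number here
parsed.balance; // 1e+24, not the exact integer from the source text
```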

RPR: Sorry, RGN. Point of order: we need someone to help with the notetaking.

RGN: Good to proceed?

RPR: Please do.

RGN: On the opposite side, during serialization there’s no way to produce something like a sequence of digits maintaining full precision, because any string you emit gets wrapped in double quotes and any JSON-incompatible data type gets rejected. So this is the problem we’re trying to address. Specifically, what we’re thinking about doing is exposing the source for primitive values to reviver functions on the way in, such that you have a new destructurable parameter here which you can use to discover the raw source text, and if you want to convert that to a perfectly precise BigInt you can do so. Likewise, when serializing you can construct an arbitrary primitive JSON text representation for your value, such as representing a BigInt value as a long string of digits. We’re coming to the end of the current stage, and there’s only a small number of open issues. Most importantly a pair of them: numbers 35 and 39, regarding assertion failure from forward modification. It turns out the JSON.parse reviver function is invoked with a receiver that is the object containing the value being visited, and it is possible to do things like modify elements of an array or members of an object that have not yet been visited. You can replace them, you can remove them completely, you can put whatever you want in their place, and a later invocation of the reviver will see the modified value. We have a PR up in the proposal which I’m expecting to merge in the next couple of days where, basically, in order to prevent this from becoming a problem, the whole data structure is snapshotted after JSON.parse but before any reviver invocation, so that when it’s time to invoke the reviver for a particular node in the tree, we know if it’s been modified. In the case it has been modified we basically suppress representation of the source text, which is also incidentally what we do for all nonprimitive values anyway… basically, you’re still allowed to perform these kinds of manipulations, but when you do so you are inherently mutating the structure away from the source text used to provide it, and therefore the callbacks don’t get access to the stale source text. A couple of other issues were regarding the scope of the proposal: one for whether or not to expose ancestor keys in the reviver and one for whether or not to expose position. Both of these are currently off the table barring identification of a significant use case. They both have some comments, and if people do feel strongly about including them, it’s easy to do. But the guidance from earlier presentations of this proposal is basically to keep it limited in scope. It would be possible to add this functionality in a follow-on proposal if there is in fact sufficient demand and sufficient utility. So the current status is basically that of these three logical open issues, the latter two are resolved as won’t fix and the first has a pending fix in place. So what comes next is writing test262 tests and helping implementations progress now that they will be unblocked on the forward modification issue, and probably in March requesting advancement to stage 4 if all goes well. That’s basically where things sit. Just wanted to make sure people knew the proposal is not sitting idle and should be advancing soon. With that I’m ready for the queue.
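(A hedged sketch of the intended usage, following the shape of the proposal repository at the time of these notes; the `source` context property and `JSON.rawJSON` are the proposal’s API and may still change before Stage 4:)

```js
// Parsing: the reviver receives an extra context argument whose `source`
// property holds the raw JSON text for primitive values.
const parsed = JSON.parse('{"balance": 999999999999999999999999}',
  (key, value, { source }) =>
    key === "balance" ? BigInt(source) : value);
parsed.balance; // 999999999999999999999999n

// Serializing: JSON.rawJSON wraps a string that JSON.stringify emits
// verbatim, so the BigInt can round-trip as a bare digit sequence.
JSON.stringify({ balance: JSON.rawJSON(parsed.balance.toString()) });
// '{"balance":999999999999999999999999}'
```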

EAO: Just noting first of all that I like the general direction of this. I think this is a thing that would be nice to fix. Observing that when reading through this, finding the rationale for the JSON.rawJSON method was hard and needed going through chat logs; having that stated clearly somewhere would be nice, nothing more than that.

RGN: In the explainer you mean?

EAO: Somewhere other than needing to find a link to the chat logs that happen to be discussing this.

EAO: Not necessarily in the spec text, but stated more clearly, just for anyone interested.

RGN: I can totally do that, thank you.

RPR: And then DLM.

DLM: Yes. Happy to hear that your plan for number 11 is won’t fix. We discussed that internally. We had some concerns about the overhead of this with regard to the utility of the use case. So, yeah, we would be happy to have that as won’t fix, or investigated in the future as opposed to being part of this proposal. That’s it. Thank you.

SYG: About the fix for the snapshotting – sorry for not really understanding the details of the semantics here. I’m trying to understand: is it okay for resource consumption? What is the thing that you’re creating as part of the snapshot? Like a skeleton object that tracks the change to each key?

RGN: In the spec text itself, which is not binding on implementations other than in the observable behaviour, we’re constructing this concept of a JSON parse record, which is a reference to the parse node, a reference to the key and the value derived from it, and then its child elements or entries in the case of a non-primitive value. And then JSON.parse recursively creates a full skeleton of them before invoking user code.

SYG: You snapshot the whole thing, not just the current thing you’re reviving? It’s the whole object?

RGN: It has to be the whole thing, because you could do deep forward modification at the top level.
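(To make the forward-modification issue concrete, an illustrative sketch; the suppression behaviour shown in the comments is the pending fix described above, not yet merged at the time of these notes:)

```js
// The reviver's receiver (`this`) is the holder object, so a reviver can
// replace entries that have not been visited yet.
JSON.parse('[11111111111111111111, 22222222222222222222]',
  function (key, value, context) {
    if (key === "0") {
      this[1] = 0; // forward modification: element 1 has not been revived yet
    }
    // With the pending fix, `context.source` is withheld for a value whose
    // holder was modified away from the original source text.
    return value;
  });
```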

SYG: Okay. So we have to fix this, because otherwise current implementations just crash since there’s an actual assertion failure. My concern is – I haven’t looked at it or implemented it, and I don’t know of a clever way to do this other than having a skeleton object with the exact same structure as the output to track the modification bit. That is going to increase the peak memory use during reviving, and we have to do the tracking now because the source text access isn’t opt-in; you always get it with the reviver. That is okay because that was the slow path. I think that’s still okay. But everything we add here that does intermediate work makes the peak memory usage higher, so are we going to get issues where stuff that used to work suddenly can’t be handled by implementations with the same memory requirement? I hope that doesn’t happen. But, yeah, I don’t know. People pass JSON things around. I don’t know how often they’re used with revivers.

RGN: I think that’s a fair characterization. As pointed out it’s already in the slow path but if the implementation is doing this, any time a reviver is present, memory usage will increase and it is absolutely conceivable that in some cases that will push something over the edge. I think that there are some optimizations that could likely constrain the cases in which that code path is exercised. But when you hit it, you hit it. And I was not able to find something so clever that it avoids a full parallel structure. I do still think this is acceptable. Because we know it is on the slow path. That possibility does exist.

SYG: Yeah. It’s not the runtime performance I’m worried about. It’s that now we’re getting closer to catastrophic failure with OOMs, so we’ll see.

RGN: Yep.

RPR: You have one minute remaining.

RGN: That said, the structure itself I tried to keep relatively minimal. It’s just going to be a collection of pointers rather than holding on to large data itself.

RGN: Okay. Looks like the queue is empty. Anyone want conversations off line in matrix or on the proposal repository, that would be welcome.

Conclusion/Decision

  • Merge the fix for issues #35 and #39
  • Be receptive to implementation feedback
  • Write test262 coverage
  • Be prepared for stage advancement at the next plenary

Change Array by Copy

Presenter: Ashley Claymore (ACE)

ACE: This is ‘change array by copy’. In this proposal, we add four new methods to Arrays. Three of those are the non-mutating counterparts to existing mutating methods: reverse(), sort() and splice() get their non-mutating versions toReversed(), toSorted(), and toSpliced(). We also now have with(), which is a kind of method version of index assignment. The TypedArray methods work very much like the Array ones, except that we decided during stage 3 that TypedArray does not need toSpliced, so it was dropped. Other than that, toReversed, toSorted and with are all on TypedArrays. And that’s the proposal in a nutshell. And for a bit of fun: at least taking RPR’s Twitter followers as a sample, there were a lot of people who were very happy that we as a committee were adding these methods to Array, which was really nice to see.
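(A quick illustration of the four methods on sample data:)

```js
const arr = [3, 1, 2];

arr.toSorted();      // [1, 2, 3]
arr.toReversed();    // [2, 1, 3]
arr.toSpliced(1, 1); // [3, 2]
arr.with(0, 9);      // [9, 1, 2]

arr;                 // [3, 1, 2] (the original is never mutated)
```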

ACE: We got stage 3 back in March. We have the PR to ecma-262 with editor approval. Thanks to everyone who reviewed that. Also many thanks to NRO for writing and working on the vast majority of the test262 tests. I feel like I learned a lot about test262 and testing JavaScript by looking at that PR. It kind of blows my mind how many edge cases we had to cover for even a relatively small proposal.

ACE: We have lots of implementations. We have JavaScriptCore and V8 and LibJS as well and there’s one in SpiderMonkey but it’s behind a build time flag currently. I believe there’s work ongoing to keep progressing that. So I’d like to see what’s on the queue.

BT: So far we have explicit support from people who don’t want to talk: JHD and ABU and MM, and I don’t know if that is the end of the message.

MM: I don’t need to talk. I should have indicated that. [Mark also supported the proposal in his queue item]

BT: No problem. But you did say "plus one, looking forward to using it"?

MM: Absolutely.

BT: Thanks MM

MM: Not only support but enthusiasm.

BT: Lots of explicit support. I’m not hearing any objections to moving this to stage 4. We’ll give it just a few more seconds here. PFC is another plus one: "This is going to be great."

BT: That is stage 4. Congratulations.

ACE: Thank you everyone. Really appreciate it.

Conclusion/Decision

  • Stage 4
  • Explicit support from PFC, MM, JHD, ABU