
Pass Texture Coordinates to bezierVertex, quadraticVertex, and curveVertex #5722

Open
1 of 17 tasks
michaels-account opened this issue Jul 13, 2022 · 9 comments

@michaels-account

Increasing Access

Unsure

Most appropriate sub-area of p5.js?

  • Accessibility (Web Accessibility)
  • Build tools and processes
  • Color
  • Core/Environment/Rendering
  • Data
  • DOM
  • Events
  • Friendly error system
  • Image
  • IO (Input/Output)
  • Localization
  • Math
  • Unit Testing
  • Typography
  • Utilities
  • WebGL
  • Other (specify if possible)

Feature enhancement details

#5699

Putting this feature request in as a response to the above issue. It could act in a similar fashion to UV assignment for custom shape vertices, adding two extra parameters at the end (in WEBGL mode) for the UV coordinates.

@davepagurek
Contributor

Increasing Access

I can see this as a way to lower the barrier to entry for making more organic-looking 3D shapes that you can render with the same tools p5 gives you for its own 3D primitives. To do this right now, you'd have to do a lot of math yourself, or learn 3D modelling software and export a model, and that has a large time cost (and potentially a monetary cost).

It could act in a similar fashion to UV assignment for custom shape vertices, adding two extra parameters at the end (in WEBGL mode) for the UV coordinates.

This will also need #5631 to be fixed in order to not lose the texture coordinate data that the user supplied.

For bezierVertex and quadraticVertex, I think we'll actually need UV coordinates for all of the control points (so 3 sets of UVs in bezierVertex and 2 in quadraticVertex). Then, when we convert those to vertex calls, we'd also mix the UV values with the same weights we use for the position values. Maybe it's the same for curveVertex, but I'd need to read up a bit on Catmull-Rom splines first (are the curves also contained in the convex hull of the control points? If not, do we risk getting weird UVs in the interpolated regions?)
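
The "same weights" idea above can be sketched in plain JavaScript. This is not p5 source, just an illustration with a hypothetical `cubicMix` helper: evaluating a cubic Bézier with the Bernstein weights works identically whether the attribute vector holds positions or UVs.

```javascript
// Sketch: evaluate a cubic Bezier at parameter t using the Bernstein
// weights. The same function interpolates positions ([x, y, z]) and
// texture coordinates ([u, v]), which is what "mix the UV values with
// the same weights" means in practice.
function cubicMix(p0, p1, p2, p3, t) {
  const s = 1 - t;
  const w = [s * s * s, 3 * s * s * t, 3 * s * t * t, t * t * t];
  return p0.map((_, i) =>
    w[0] * p0[i] + w[1] * p1[i] + w[2] * p2[i] + w[3] * p3[i]
  );
}

// Positions and UVs use exactly the same weights:
const pos = cubicMix([0, 0], [10, 0], [10, 10], [0, 10], 0.5); // [7.5, 5]
const uv  = cubicMix([0, 0], [1, 0], [1, 1], [0, 1], 0.5);     // [0.75, 0.5]
```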

If we add a UV coordinate to the longest form of the bezierVertex function, its signature becomes this:

bezierVertex(x2, y2, z2, u2, v2, x3, y3, z3, u3, v3, x4, y4, z4, u4, v4)

This is kind of a lot to grok. It's a bigger API departure but maybe one could introduce a bezierControlPoint function, which lets you split the above into three calls:

bezierControlPoint(x2, y2, z2, u2, v2)
bezierControlPoint(x3, y3, z3, u3, v3)
bezierControlPoint(x4, y4, z4, u4, v4)

@GregStanton
Collaborator

GregStanton commented Dec 27, 2023

Could we use the starting point and direction of a shape's perimeter to infer the texture map, instead of requiring the user to pass in texture coordinates?[1] This may improve the user experience without sacrificing much flexibility, and it'd eliminate the need for major API changes.

Having some clarity on this issue will help me to organize work on #6560. I'll explain my thinking based on the example from issue #5699 (shown below). I'm in unfamiliar territory here, so please don't hesitate to correct me :) I'll address direction and distance separately.

Direction

If the user specifies the curved shape between beginShape()/endShape() by starting from the top left and proceeding clockwise, then we could map $a$ to the top left corner of the curved shape, and when we move clockwise in $uv$-space, we could also move clockwise in $xy$-space. If the user draws their shape counterclockwise, a clockwise movement in $uv$-space could correspond to a counterclockwise movement in $xy$-space.

Distance

We might have two modes: SEGMENT and PERIMETER.[2] Maybe we could set this with a textureMap() function that would accompany the existing textureMode() and textureWrap() functions.

In SEGMENT mode, we could map each edge of the texture image to a rendered path segment (the top edge could map to the first segment, the right edge could map to the second segment, and so on).[3]

In PERIMETER mode, we could ignore edges and map the entire texture perimeter to the entire shape perimeter (i.e. moving along the texture perimeter could correspond to moving along the shape perimeter by a corresponding amount).
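
The PERIMETER idea could be sketched roughly as follows. This is a hypothetical helper, not a proposed p5 function: it maps a fraction s of the distance around a shape's perimeter to the corresponding point on the texture's boundary (the unit square, traversed clockwise from the top-left corner).

```javascript
// Hypothetical helper for PERIMETER mode: map a perimeter fraction
// s in [0, 1) to a (u, v) point on the unit-square texture boundary.
// The boundary is four unit-length edges, walked clockwise from (0, 0).
function perimeterToUV(s) {
  const d = (s % 1) * 4; // distance along the boundary, in edge lengths
  if (d < 1) return [d, 0];       // top edge, left to right
  if (d < 2) return [1, d - 1];   // right edge, top to bottom
  if (d < 3) return [3 - d, 1];   // bottom edge, right to left
  return [0, 4 - d];              // left edge, bottom to top
}
```

The shape-side distance would presumably be measured as arc length along the rendered path, so equal steps along the shape's perimeter correspond to equal steps along the texture boundary.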

Footnotes

  1. This is at least somewhat similar to a feature of Vectorworks, in which "The starting point and direction the wall is drawn affect how a texture is applied."

  2. In the future, we may want to support new kinds of surface primitives (e.g. we could allow the user to create Bézier triangles by using bezierVertex() together with the TRIANGLES shape kind). I haven't yet considered whether the mode names might need to be revised to accommodate those cases.

  3. If there are fewer than four path segments, multiple texture edges could map to the last segment. If there are more than four segments, we could maybe fall back to PERIMETER mode.

@davepagurek
Contributor

Could we use the starting point and direction of a shape's perimeter to infer the texture map, instead of requiring the user to pass in texture coordinates?

I think the answer is yes, we can, but we can't eliminate manual UVs from the API. Some explanation:

Case for manual UVs

A common technique is to "pin" texture coordinates to vertices, but then move the vertices in 3D space without changing their texture coordinates, making the texture stretch to fit the shape as the shape changes. If the texture coordinates are derived from the positions and directions in 3D space, then it becomes hard to change one without changing the other.

The nice thing about many curve formulations used in graphics is that control points are effectively the same as vertices in terms of the data they store, and how you work with them. In most 3D programs, when doing UV mapping, you can see your curves in 3D and also where their texture coordinates lie on a 2D plane. Each vertex corresponds to a point in both views. Each control point on a curve also corresponds to a point on both. This is nice because it is predictable: the same algorithms for evaluating the curve in 3D space also apply to the 2D UV coordinates. Here's a screenshot of some older software using a curved model and showing its curves in 2D space as well:

[Screenshot: an older 3D modelling program displaying a curved model in 3D alongside its curves laid out in 2D UV space.]

Lastly, since evaluating the UV along a curve is done by interpolating the control point UVs the same way one interpolates "regular" points, I think if we want to infer UVs, we'd do so by generating UV values per control point. That would mean that under the hood, we would still end up with a representation like this regardless, so I think for that reason alone, it makes sense to start with this representation.

Deriving UVs for curves

I think it's still potentially useful to give users a way of getting UVs without having to define them themselves for cases when they aren't trying to pin a specific texture map image to a specific curve. I think my worry is that this is a pretty general tool, drawing curves, and that there are a lot of edge cases that come up. Something like manual mapping eliminates all the edge cases at the cost of more manual work for the programmer; derived UVs can maybe make an easier API but it puts a lot more pressure on us to answer questions like these:

  • How do these mappings work if the curve is in 3D space, where clockwise and counter-clockwise are view-dependent concepts?
  • If you're drawing a curve that is just a small part of a larger shape, how would you specify what section you're drawing? e.g. right now it sounds like we're always mapping the curve to the full size of the texture based on either edges or the perimeter. Maybe we could make an API to specify a UV bounding box to fit the curve into (defaulting to (0,0)-(1,1)) for the cases when you don't want it to map to the entire texture?
  • In SEGMENT mode, what would you do if you're drawing a shape that has more than four segments?

I think something like this would definitely be useful, but I'm not sure I'd want to rely on that as the only API. Another thought: could we design the API in a way that allows p5 libraries to define mapping modes? e.g. if they had a manual UV mapping API available, could a library override the shape drawing behaviour for when you don't specify UVs, and derive them with whatever strategy they like?

Deriving UVs as a more general problem

One of the reasons I think deriving UVs is something that can be done separately from our curve implementation is because I think it's a useful feature for more than just curves.

One example: if you want to build a 3D shape out of spheres, you run into a similar problem to what I was saying about not mapping to the whole texture at once. Each p5 sphere has UVs that map to the whole texture. If you want to combine the spheres into one big shape, but then you apply a texture, you'll see that texture repeated onto every sphere. An idea that could help deal with that is to provide some APIs to reassign all the UVs in the whole model, and provide a few strategies for doing so:

  • the sphere mapping method: let's say you've got a 360 degree photo that you want to use as your texture, mapped to a rectangle with an equirectangular projection. One way to assign UVs to every point/control point on the model is to pretend the texture is on a big sphere around your model, and see what part of the sphere each point is pointing at (uv = normalToEquirectangular(normalize(pt - center))).
  • for 2D things, a plane method: we find the plane that best fits all the points, and either pick an orientation based on the starting point and direction, or let users provide some vector or rotation, and then assign UVs by projecting 3D points onto that plane and normalizing them into a 0-1 range
  • A more complex sphere mapping method: the first bullet's sphere mapping is based only on angle, not position. Possibly one could start with a sphere, and then slowly move its points closer to the surface of a model to shrink wrap it to your shape, and then at a certain point pick UVs on your model based on the closest sphere points? This is getting into the territory of doing new texture mapping research though haha
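
The first bullet's uv = normalToEquirectangular(normalize(pt - center)) formula could look something like this. The function names come from the comment above; they aren't p5 APIs, just a sketch of the math under the standard equirectangular convention (u from the longitude, v from the latitude):

```javascript
// Sketch of the sphere-mapping bullet above: derive a UV for a point by
// looking up the direction from the model's center to the point on an
// imagined equirectangular texture wrapped around the model.
function normalToEquirectangular(n) {
  // n is a unit vector [x, y, z]; u wraps with longitude, v with latitude
  const u = 0.5 + Math.atan2(n[2], n[0]) / (2 * Math.PI);
  const v = 0.5 - Math.asin(n[1]) / Math.PI;
  return [u, v];
}

function sphereMapUV(pt, center) {
  const d = [pt[0] - center[0], pt[1] - center[1], pt[2] - center[2]];
  const len = Math.hypot(d[0], d[1], d[2]);
  return normalToEquirectangular([d[0] / len, d[1] / len, d[2] / len]);
}
```

Because this uses only the direction of each point from the center (not its distance), it's the purely angle-based mapping the third bullet contrasts against.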

Anyway, I'm not advocating that we build the above as part of this curve drawing API; it hopefully just paints the picture that we're tapping into a fairly complex problem with a lot of directions it can go in, and explains why my inclination is to build something that others can build their own methods on top of, in addition to some simpler solutions.

@GregStanton
Collaborator

GregStanton commented Dec 29, 2023

Thank you so much for your thoughtful reply @davepagurek! There's definitely a lot to consider. I'll start by sharing some initial thoughts about the API, under the assumption that users will manually specify texture coordinates in all cases. I still want to think about this more, but I'm pretty excited about it, since I think the API change alone could be a big improvement.

API options for vertex functions

Here are three options, exemplified by the case of Bézier curves:

  • Option 1: bezierVertex(x2, y2, z2, u2, v2, x3, y3, z3, u3, v3, x4, y4, z4, u4, v4)
  • Option 2: bezierControlPoint(x, y, z, u, v) (called multiple times)
  • Option 3: bezierVertex(x, y, z, u, v) (called multiple times)

The third option, which I added, is the same as Option 2 but uses "Vertex" instead of "ControlPoint."

Advantages of Option 3 (and for the most part, Option 2)

The new API could improve readability, consistency, and extensibility.

  • Readability
    • The new names would be more accurate, since each command really would specify only one vertex. The current names are a bit confusing since a function like bezierVertex() takes multiple points and really specifies a curve, not just a vertex. In fact, the corresponding commands in the native canvas API and the SVG specification work basically the same way as p5's commands, and they're "curve" commands, not "vertex" commands. This discrepancy was observed by @zenozeng back in 2015.
    • It doesn't introduce new function names to the API, which may ease the transition to the new usage (e.g. the reference can still say that shapes are made from a series of vertices, rather than a mix of vertices and control points). If we want to implement the change now, we could deprecate the old usage before removing it in p5.js 2.0.
    • It won't require users to remember that some points are called "vertices" while others are called "control points." In both cases, these are points that are used to specify paths. Also, curveVertex() already uses "Vertex" in its name to refer to points that may or may not be on the curve itself.
    • It eliminates inconsistent usage of the term "vertex." As already noted, curveVertex() specifies points that only guide the curve and points that are actually on the curve, so it forces us to interpret control points and vertices as being the same. However, bezierVertex() forces us to interpret control points and vertices as distinct concepts: this function has a singular name but takes coordinates for three points, which only makes sense if we distinguish control points (the first two points) from vertices (the third point). This is a problem I've been wishing we could fix, and the new API manages to fix it!
    • It would eliminate long parameter lists, as already noted. We could keep the original parameter list for arcVertex(); the parameter list only contains coordinates for one point, so it's less confusing than a command like bezierVertex().
    • It simplifies method signatures and eliminates the need to keep multiple signatures for the same function, without increasing the size of the API. For example, right now, the reference page for bezierVertex() specifies two signatures: bezierVertex(x2, y2, x3, y3, x4, y4) and bezierVertex(x2, y2, z2, x3, y3, z3, x4, y4, z4). In the new API, there would be only one simple signature: bezierVertex(x, y, [z], [u], [v]). This is more beginner friendly.
    • Since the bezierVertex() function would work more like curveVertex(), it eliminates the need for a separate quadraticVertex() command. This has multiple benefits:
      • Eliminating quadraticVertex() improves API consistency, since it's already the case that there is no quadraticBezier() curve command; there's just a bezier() command.
      • Having a single vertex function for Bézier curves also introduces the possibility of supporting higher-order Bézier curves beyond quadratic and cubic Béziers, without adding to the size of the API. In other words, we could reduce the size of the API while simultaneously making it more powerful and easier to understand.
      • Eliminating quadraticVertex() eliminates confusion arising from a conflicting use of the term "quadratic." Math students learn that the vertex of a quadratic equation's graph corresponds to its maximum or minimum value, which is not true of a quadratic Bézier vertex. If we want, we could deprecate quadraticVertex() before removing it entirely in the next major version of p5.js.
  • Consistency:
    • It would create consistency with the current curveVertex() API, which people are already used to.
    • The new API would use one function call per point for all path primitives. The current API bundles multiple points together into a single function call for Bézier curves, but uses one function call per point for polylines and Catmull-Rom splines.
    • The new API would allow the user to specify the first point on a Bézier curve with bezierVertex(). The current API allows the user to call vertex() to start and continue a polyline, and to call curveVertex() to start and continue a Catmull-Rom spline; however, to start a shape with a quadratic or cubic Bézier curve, the user needs to mix commands, specifying only the first vertex with vertex(). (Since the vertices are all specified in a common (x, y, [z], [u], [v]) format, the implementation will just set it aside as the first vertex regardless of the vertex type. So, supporting the old syntax shouldn't require us to complicate the codebase.)
  • Extensibility:
    • If we use only bezierVertex(), we open up the possibility of creating higher-order Bézier curves without increasing the size of the p5.js API, as noted above.
    • This would make it possible to support Bézier surfaces without any change to the API. For example, a quadratic Bézier triangle is defined by six control points. If we specify the first of these with vertex() and specify the next four with quadraticVertex(), we only have five vertices. (We could potentially use vertex() to specify the sixth vertex at the end, but this is inconsistent with how vertex() is used everywhere else, and it requires us to mix command types to create a single primitive.)
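
The Bézier-triangle point in the extensibility list can be made concrete with a small sketch. This is not a proposed p5 implementation; it just shows that a quadratic Bézier triangle's six control points evaluate with barycentric Bernstein weights, and (as with curves) the same weights apply to any attribute vector, positions or UVs alike. The multi-index names (p200, p110, ...) are conventional notation, not p5 identifiers:

```javascript
// Sketch: evaluate a quadratic Bezier triangle at barycentric coordinates
// (a, b, c), with a + b + c = 1. Corners are p200, p020, p002; the
// remaining three control points p110, p011, p101 shape the edges.
function bezierTriangle(p200, p020, p002, p110, p011, p101, a, b, c) {
  const w200 = a * a, w020 = b * b, w002 = c * c;
  const w110 = 2 * a * b, w011 = 2 * b * c, w101 = 2 * a * c;
  return p200.map((_, i) =>
    w200 * p200[i] + w020 * p020[i] + w002 * p002[i] +
    w110 * p110[i] + w011 * p011[i] + w101 * p101[i]
  );
}
```

At a corner (e.g. a = 1, b = c = 0) the surface passes exactly through that corner control point, which is why a per-point vertex API maps onto this primitive so directly.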

Disadvantages

  • Right now, p5's API is actually quite similar to the native canvas and SVG APIs for specifying paths. For example, bezierVertex() corresponds to the canvas's bezierCurveTo() and SVG's C command. They all take two control points and one vertex. Changing the API would mean a bigger departure from these commonly used APIs (as well as the Processing API). However, beginners probably won't tend to know those other APIs anyway, and more experienced users may have less trouble adapting, so this doesn't seem like a major concern.
  • Changing the public API will typically cause some confusion (e.g. old YouTube tutorials may end up using deprecated syntax or syntax that's no longer supported, and add-on libraries that want to use the most recent version of p5.js will need to update their code if we ultimately remove support for the old usage). However, p5.js has already reserved the right to make breaking changes in major releases, and that policy may have been set with just this type of situation in mind.

Other advantages or disadvantages?

If others want to help extend these lists of advantages and disadvantages, I'd be happy to incorporate their comments into the lists above (with links to the original comments). That way we can compile everyone's thoughts in one place.

Implement now?

If we reach a consensus, it seems like we could solve the original requirements of this issue now (as part of #6560), rather than waiting for the next major version of p5.js. If we use "Vertex" in all the function names instead of "ControlPoint," we wouldn't need to maintain separate reference pages for deprecated features. Later, we could eliminate deprecated usage altogether in p5.js 2.0, which would eliminate any performance hit caused by having to process two types of parameter lists.

@capGoblin
Contributor

Just want to save my progress on this issue: this sketch has working examples of passing texture coordinates to bezierVertex, quadraticVertex, and curveVertex by calling them multiple times.

@nijatmursali

Is there any solution to this? I am also trying to implement it, but it gives an error that bezierVertex expects a maximum of 9 parameters.

@davepagurek
Contributor

Not currently, but this is something we aim to enable in our 2.0 release!

@nijatmursali

Thank you for the reply, @davepagurek. Is there any date for the release? We are currently working on a big project in which we have to place textures on each shape, and most of our shapes include bezier and quadratic vertices.

@davepagurek
Contributor

Not yet, so for now your best bet will be to manually convert your beziers to polylines that you can use with vertex(). To do this, you can use bezierPoint() to calculate positions along your curve. Normally you'd use three calls to this for x, y, and z, but for texture coordinates, you can add two additional calls for u and v.
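
The workaround above can be sketched as follows. The `bezierPt` helper here mirrors the cubic formula that p5's bezierPoint(a, b, c, d, t) evaluates (it's written out so the example runs standalone; in a real sketch you'd call bezierPoint directly), and `sampleCurve` is a hypothetical name for the flattening loop:

```javascript
// Workaround sketch: flatten a cubic Bezier into data for vertex() calls,
// evaluating u and v the same way as x, y, and z.
function bezierPt(a, b, c, d, t) {
  const s = 1 - t;
  return s * s * s * a + 3 * s * s * t * b + 3 * s * t * t * c + t * t * t * d;
}

// P0..P3 are [x, y, z, u, v] control points; returns steps + 1 samples.
function sampleCurve(P0, P1, P2, P3, steps) {
  const verts = [];
  for (let i = 0; i <= steps; i++) {
    const t = i / steps;
    // One evaluation per attribute: x, y, z, plus u and v.
    verts.push([0, 1, 2, 3, 4].map(k =>
      bezierPt(P0[k], P1[k], P2[k], P3[k], t)
    ));
  }
  return verts;
}
```

In a p5 sketch you would then loop over the samples between beginShape()/endShape() and call vertex(x, y, z, u, v) for each, which the existing WEBGL vertex() API already accepts.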

6 participants