
IPv6 relay support #462

Open · wants to merge 1 commit into master
Conversation

@hetelek commented Jun 4, 2022

Requires these changes: pion/turn#265
I've only tested on Linux with UDP TURN servers.

This adds support for IPv6 relays. When discovering candidates, Pion will attempt to contact the TURN servers via both IPv4 and IPv6. You can also specify an IPv6 literal address in the config, such as:

```json
{
    "iceServers": [
        {
            "urls": [
                "turn:[2001:0db8:85a3:0000:0000:8a2e:0370:7334]:3478"
            ],
            "username": "user1",
            "credential": "password1"
        }
    ]
}
```

```diff
@@ -598,7 +617,8 @@ func (a *Agent) gatherCandidatesRelay(ctx context.Context, urls []*URL) { //noli
 		relayProtocol = "dtls"
 		locConn = &fakePacketConn{conn}
 	case url.Proto == ProtoTypeTCP && url.Scheme == SchemeTypeTURNS:
 		conn, connectErr := tls.Dial(NetworkTypeTCP4.String(), TURNServerAddr, &tls.Config{
+		network = tcpNetworkType
```
@hetelek (Author) commented on the diff:

Previously, network was hardcoded to NetworkTypeUDP4 even when url.Proto == ProtoTypeTCP.
Not sure if this was correct?

@hetelek commented Jun 4, 2022

> What you do here is that you have the user decide whether to use IPv4 or IPv6. Then, you try just one address of the relay, either the first IPv4 address or the first IPv6 address.

This requires no extra configuration from the user. The example in the description shows that an IPv6 address can be used, but it's not required. If a hostname is provided (e.g., turn:turnserver:3578), then Pion will attempt to gather candidates from it via both IPv4 and IPv6.

@jech (Member) commented Jun 4, 2022

I stand corrected. The iteration happens at a higher level than I was expecting.

I need time to grok the fullness.

@jech (Member) commented Jun 8, 2022

I strongly support this work. IPv6 support is necessary for Pion, it greatly simplifies connectivity in many cases (e.g. in Docker containers).

This version works by gathering two candidates for each dual-stack TURN server, one for each address family. Two comments:

  1. For TCP servers, this might be overkill: it should be enough to call net.Dial("tcp", ...) and have the network stack perform ordinary probing for listening addresses.
  2. For UDP servers, this might not be enough — the server may have multiple IPv6 addresses, only some of which are reachable at a given time. Shouldn't we be generating one candidate for each address?

@hetelek (Author) commented Jun 11, 2022

> 2. Shouldn't we be generating one candidate for each address?

Yes, that sounds like a better approach. I'm thinking we should limit the maximum number of addresses, in case the DNS lookup returns many IPs: for example, at most 5 IPv6 and 5 IPv4 addresses per relay.

> 1. For TCP servers, this might be overkill

I really don't know how TCP works here, so I'm looking for guidance. It's not clear to me why we would force IPv6/IPv4 for UDP but not for TCP.

Also, will this break TLS, since we will be passing IP addresses instead of hostnames? On the web with HTTPS, the hostname is checked against the certificate's common name. Not sure if that's also applicable here.

@stv0g (Member) commented Jan 12, 2023

Can we separate the retry logic into a separate PR? I think it's unrelated to the IPv6 support.

Also pion/turn#265 might get obsolete if pion/turn#276 gets merged.
