ipfs while mostly idle generates ~2GB/day of mDNS traffic #8695
Comments
Thank you for submitting your first issue to this repository! A maintainer will be here shortly to triage and review.
Finally, remember to use https://discuss.ipfs.io if you just need general support.
Hi @jmesmon, thank you for digging into this. A few thoughts:
We probably won't have the bandwidth to debug this any time soon. It looks like you already have a pretty good testing environment set up. We'd really appreciate it if you'd debug this and submit a fix!
@jmesmon, any update on this?
I have not investigated this further. I have observed (after some system restarts that started ipfs again) that three processes are now involved in the mDNS traffic: chrome, mdns-responder, and ipfs. It's possible that some combination of misbehavior is at play. The high network usage continued to occur, so I've disabled ipfs autostarting on all my systems.
[Background: I know Go and IPFS but I'm learning libp2p/mDNS as I go; please fill in the blanks when needed.]
I'm blocked on comments on the above before moving forward, as I know too little about all of this to make an informed decision on how to proceed.
We've removed the old mDNS implementation in Kubo 0.14 (#9048). If this problem comes up again, please open a new issue and note that it relates to the new mDNS implementation.
Checklist
Installation method
ipfs-desktop
Version
Config
Description
A few days ago, I started running GlassWire to observe general trends in network usage. Right now it shows ipfs.exe as having used 1.9 GB of data over the past day in purely mDNS traffic (ipfs also has the highest network utilization overall on this system; an additional 2.5 GB of its traffic is categorized as "Other" by GlassWire). The recv and send amounts are roughly balanced here, so this does not appear to be caused simply by large amounts of mDNS traffic arriving from my local network.
I captured, with Wireshark, the IPv4 mDNS traffic on this machine that was sourced from the machine itself.
It appeared the machine (I was unable to definitively isolate this to ipfs) was sending various queries ("QM" questions) for named hosts, plus a service discovery query for
_ipfs-discovery._udp.local
(this one almost certainly generated by ipfs). Then, every 60 seconds, a large, identical query response was broadcast 66 times in close succession. Here's an example Wireshark summary of 3 packets (I have the full content if useful): 2 from one grouping of 66 retransmits, and 1 from the next.
packet 1, group 1: 1007613 781.041374 192.168.6.137 224.0.0.251 MDNS 619 Standard query response 0xb81c PTR jcd6q49orzi1arpquo79g0iield0aqub8wzqmm3yxrhft5kh4unygym6wxrc._p2p._udp.local SRV 0 0 4001 jcd6q49orzi1arpquo79g0iield0aqub8wzqmm3yxrhft5kh4unygym6wxrc.local TXT A 100.100.231.29 AAAA fd7a:115c:a1e0:ab12:4843:cd96:6264:e71d
packet 2, group 1: 1007618 781.041664 192.168.6.137 224.0.0.251 MDNS 619 Standard query response 0xb81c PTR jcd6q49orzi1arpquo79g0iield0aqub8wzqmm3yxrhft5kh4unygym6wxrc._p2p._udp.local SRV 0 0 4001 jcd6q49orzi1arpquo79g0iield0aqub8wzqmm3yxrhft5kh4unygym6wxrc.local TXT A 100.100.231.29 AAAA fd7a:115c:a1e0:ab12:4843:cd96:6264:e71d
packet 1, group 2: 1043235 841.044920 192.168.6.137 224.0.0.251 MDNS 619 Standard query response 0x0f0d PTR jcd6q49orzi1arpquo79g0iield0aqub8wzqmm3yxrhft5kh4unygym6wxrc._p2p._udp.local SRV 0 0 4001 jcd6q49orzi1arpquo79g0iield0aqub8wzqmm3yxrhft5kh4unygym6wxrc.local TXT A 100.100.231.29 AAAA fd7a:115c:a1e0:ab12:4843:cd96:6264:e71d
The content of this message suggests that ipfs is the originating program. While I have not examined it closely, at the same time these IPv4 mDNS packets are being sent out 66 times, IPv6 packets carrying the same payload also appear to be sent at a similar rate. It may (or may not) be worth noting that these packets are large enough that they require IP fragmentation to be transmitted over the Ethernet link.
Also notable: the query that triggers this burst of 66 query responses is identified by Wireshark as a PTR query for _p2p._udp.local sent from the same host (I expect this is a query sent by the same ipfs instance that is sending the responses).
Thoughts:
Some mDNS standard links that indicate this behavior is not good:
mDNS Announcing describes how a host should send unsolicited query responses. Updating expands on this by describing when to re-announce.
Announcing provides the maximum number of unsolicited query responses per announcement (and the time between them). It also notes that regular updates are not permitted.
Though this is modified by Updating, it still remains stricter than the observed behavior: it states that updates should not happen more often than every 10 minutes, and advises creating a separate protocol to manage updates that must happen more often than that.