Compare commits

77 Commits

Author SHA1 Message Date
Marwan Sulaiman
df49cb24d5 ipn, ipn/ipnlocal: add an in memory serve config
This PR adds a parallel in-memory ServeConfig so that foreground
funnels are guaranteed to go away in case of unexpected shutdown

Updates #8489

Signed-off-by: Marwan Sulaiman <marwan@tailscale.com>
2023-08-24 15:38:54 +01:00
Marwan Sulaiman
9c07f4f512 all: replace deprecated ioutil references
This PR removes calls to ioutil library and replaces them
with their new locations in the io and os packages.

Fixes #9034
Updates #5210

Signed-off-by: Marwan Sulaiman <marwan@tailscale.com>
2023-08-23 23:53:19 +01:00
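For reference, the deprecated ioutil helpers map one-to-one onto io and os. A minimal sketch of the substitutions a change like this makes (file names are illustrative):

```go
package main

import (
	"io"
	"os"
)

func main() {
	// ioutil.ReadFile -> os.ReadFile
	data, err := os.ReadFile("example.txt")
	if err != nil {
		panic(err)
	}
	// ioutil.WriteFile -> os.WriteFile
	if err := os.WriteFile("copy.txt", data, 0644); err != nil {
		panic(err)
	}
	// ioutil.ReadAll -> io.ReadAll
	f, err := os.Open("copy.txt")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	if _, err := io.ReadAll(f); err != nil {
		panic(err)
	}
	// ioutil.TempDir / ioutil.TempFile -> os.MkdirTemp / os.CreateTemp
	dir, err := os.MkdirTemp("", "demo")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)
}
```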
Denton Gentry
1b8a538953 scripts/installer.sh: add CloudLinux and Alibaba Linux
Fixes https://github.com/tailscale/tailscale/issues/9010

Signed-off-by: Denton Gentry <dgentry@tailscale.com>
2023-08-23 15:29:17 -07:00
Sonia Appasamy
776f9b5875 client/web: open auth URLs in new browser tab
Open control server auth URLs in new browser tabs on web clients
so users don't lose the original client URL when redirected for login.

Updates tailscale/corp#13775

Signed-off-by: Sonia Appasamy <sonia@tailscale.com>
2023-08-23 17:38:50 -04:00
Brad Fitzpatrick
ad9b711a1b tailcfg: bump capver to 72 to restore UPnP
Actually fixed in 77ff705545 but that was cherry-picked to a branch
and we don't bump capver in branches.

This tells the control plane that UPnP should be re-enabled going
forward.

Updates #8992

Change-Id: I5c4743eb52fdee94175668c368c0f712536dc26b
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2023-08-23 13:55:39 -07:00
Brad Fitzpatrick
ea4425d8a9 ipn/ipnlocal, wgengine/magicsock: move UpdateStatus stuff around
Upcoming work on incremental netmap change handling will require some
replumbing of which subsystems get notified about what. Done naively,
it could break "tailscale status --json" visibility later. To make sure
I understood the flow of all the updates, I reread the status code
and realized parts of ipnstate.Status were being populated by the wrong
subsystems.

The engine (wireguard) and magicsock (data plane, NAT traversal) should
only populate the stuff that they uniquely know. The WireGuard bits
were fine, but magicsock was populating stuff that LocalBackend
could've better handled, so move it there.

Updates #1909

Change-Id: I6d1b95d19a2d1b70fbb3c875fac8ea1e169e8cb0
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2023-08-23 13:35:47 -07:00
Maisem Ali
74388a771f cmd/k8s-operator: fix regression from earlier refactor
I forgot to move the defer out of the func, so the tsnet.Server
immediately closed after starting.

Updates #502

Signed-off-by: Maisem Ali <maisem@tailscale.com>
2023-08-23 15:14:29 -04:00
Brad Fitzpatrick
9089efea06 net/netmon: make ChangeFunc's signature take new ChangeDelta, not bool
Updates #9040

Change-Id: Ia43752064a1a6ecefc8802b58d6eaa0b71cf1f84
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2023-08-23 10:42:14 -07:00
Sonia Appasamy
78f087aa02 cli/web: pass existing localClient to web client
Updates tailscale/corp#13775

Signed-off-by: Sonia Appasamy <sonia@tailscale.com>
2023-08-23 13:25:11 -04:00
David Anderson
5cfa85e604 tsweb: clean up pprof handler registration, document why it's there
Updates #cleanup

Signed-off-by: David Anderson <danderson@tailscale.com>
2023-08-23 10:16:14 -07:00
Will Norris
09068f6c16 release: add empty embed.FS for release files
This ensures that `go mod vendor` includes these files, which are needed
for client builds run in corp.

Updates tailscale/corp#13775

Signed-off-by: Will Norris <will@tailscale.com>
2023-08-23 09:54:10 -07:00
Maisem Ali
836f932ead cmd/k8s-operator: split operator.go into svc.go/sts.go
Updates #502

Signed-off-by: Maisem Ali <maisem@tailscale.com>
2023-08-23 12:07:07 -04:00
Maisem Ali
7f6bc52b78 cmd/k8s-operator: refactor operator code
It was jumbled, doing a lot of things; this breaks it up into
the svc reconciliation and the tailscale sts reconciliation.

Prep for future commit.

Updates #502

Signed-off-by: Maisem Ali <maisem@tailscale.com>
2023-08-23 12:07:07 -04:00
Will Norris
cf45d6a275 client/web: remove old /redirect handler
I thought this had something to do with Synology or QNAP support, since
they both have specific authentication logic.  But it turns out this was
part of the original web client added in #1621, and then refactored as
part of #2093.  But with how we handle logging in now, it's never
called.

Updates tailscale/corp#13775

Signed-off-by: Will Norris <will@tailscale.com>
2023-08-22 16:39:30 -07:00
Andrew Lytvynov
05523bdcdd release/dist/cli: add gen-key command (#9023)
Add a new subcommand to generate a Ed25519 key pair for release signing.
The same command can be used to generate both root and signing keys.

Updates #8760

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
2023-08-22 16:29:56 -07:00
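The core of a gen-key style command is crypto/ed25519 key generation. A minimal sketch of that core (not the actual release/dist/cli code; the PEM layout and file names here are assumptions):

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"encoding/pem"
	"os"
)

func main() {
	pub, priv, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		panic(err)
	}
	// Raw key bytes in PEM blocks for illustration; the real command's
	// on-disk format may differ (e.g. PKCS#8 or a custom encoding).
	writePEM("signing-key.pem", "PRIVATE KEY", priv) // root keys stay offline
	writePEM("signing-key.pub.pem", "PUBLIC KEY", pub)
}

func writePEM(path, typ string, b []byte) {
	f, err := os.Create(path)
	if err != nil {
		panic(err)
	}
	defer f.Close()
	if err := pem.Encode(f, &pem.Block{Type: typ, Bytes: b}); err != nil {
		panic(err)
	}
}
```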
James Tucker
e1c7e9b736 wgengine/magicsock: improve endpoint selection for WireGuard peers with rx time
If we don't have the ICMP hint available, such as on Android, we can use
the signal of rx traffic to bias toward a particular endpoint.

We don't want to stick to a particular endpoint for a very long time
without any signals, so the sticky time is reduced to 1 second, which is
large enough to avoid excessive packet reordering in the common case,
but should be small enough that either rx provides a strong signal, or
we rotate in a user-interactive schedule to another endpoint, improving
the feel of failover to other endpoints.

Updates #8999

Co-authored-by: Charlotte Brandhorst-Satzkorn <charlotte@tailscale.com>

Signed-off-by: James Tucker <james@tailscale.com>
Signed-off-by: Charlotte Brandhorst-Satzkorn <charlotte@tailscale.com>
2023-08-22 15:39:08 -07:00
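Roughly, the policy described above amounts to: stay on the current endpoint while rx traffic keeps confirming it, and otherwise allow rotation once the 1-second sticky window has passed. An illustrative sketch of that policy only (names and structure are assumptions, not the magicsock code):

```go
package main

import (
	"fmt"
	"time"
)

// stickyTime is long enough to avoid excessive packet reordering in the
// common case, but short enough that failover feels user-interactive.
const stickyTime = time.Second

type endpoint struct {
	lastRx   time.Time // refreshed whenever we receive traffic on this path
	chosenAt time.Time // when this endpoint was last selected
}

// shouldRotate reports whether we may try another endpoint: rx traffic
// is a strong signal to stay put; absent that, rotate once the sticky
// window expires.
func (e *endpoint) shouldRotate(now time.Time) bool {
	if now.Sub(e.lastRx) < stickyTime {
		return false
	}
	return now.Sub(e.chosenAt) >= stickyTime
}

func main() {
	e := &endpoint{chosenAt: time.Now().Add(-2 * time.Second)}
	fmt.Println(e.shouldRotate(time.Now())) // true: no recent rx, window expired
}
```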
James Tucker
5edb39d032 wgengine/magicsock: clear out endpoint statistics when it becomes bad
There are cases where we do not detect the non-viability of a route, but
we will instead observe a failure to send. In a Disco path this would
normally be handled as a side effect of Disco, which is not available to
non-Disco WireGuard nodes. In both cases, recognizing the failure as
such will result in faster convergence.

Updates #8999
Signed-off-by: James Tucker <james@tailscale.com>
2023-08-22 15:22:50 -07:00
Charlotte Brandhorst-Satzkorn
7c9c68feed wgengine/magicsock: update lastfullping comment to include wg only
LastFullPing is now used for disco or wireguard only endpoints. This
change updates the comment to make that clear.

Updates #7826

Signed-off-by: Charlotte Brandhorst-Satzkorn <charlotte@tailscale.com>
2023-08-22 14:31:19 -07:00
Aaron Klotz
ea693eacb6 util/winutil: add RegisterForRestart, allowing programs to indicate their preferences to the Windows restart manager
In order for the installer to restart the GUI correctly post-upgrade, we
need the GUI to be able to register its restart preferences.

This PR adds API support for doing so. I'm adding it to OSS so that it
is available should we need to do any such registrations on OSS binaries
in the future.

Updates https://github.com/tailscale/corp/issues/13998

Signed-off-by: Aaron Klotz <aaron@tailscale.com>
2023-08-22 15:06:48 -06:00
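Under the hood this kind of registration is the Win32 RegisterApplicationRestart call. A minimal sketch of invoking it directly (the actual util/winutil API presumably wraps this with more options; the command line below is illustrative):

```go
//go:build windows

package main

import (
	"fmt"
	"syscall"
	"unsafe"
)

var (
	kernel32            = syscall.NewLazyDLL("kernel32.dll")
	procRegisterRestart = kernel32.NewProc("RegisterApplicationRestart")
)

// registerForRestart asks the Windows restart manager to relaunch the
// current process with the given command line after a restart.
func registerForRestart(cmdline string) error {
	p, err := syscall.UTF16PtrFromString(cmdline)
	if err != nil {
		return err
	}
	// dwFlags of 0 means restart after crash, hang, patch, and reboot alike.
	hr, _, _ := procRegisterRestart.Call(uintptr(unsafe.Pointer(p)), 0)
	if hr != 0 { // S_OK == 0
		return fmt.Errorf("RegisterApplicationRestart failed: HRESULT 0x%x", hr)
	}
	return nil
}

func main() {
	if err := registerForRestart("--restarted-after-update"); err != nil {
		fmt.Println(err)
	}
}
```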
James Tucker
3a652d7761 wgengine/magicsock: clear endpoint state in noteConnectivityChange
There are latency values stored in bestAddr and endpointState that are
no longer applicable after a connectivity change and should be cleared
out, following the documented behavior of the function.

Updates #8999

Signed-off-by: James Tucker <james@tailscale.com>
2023-08-22 13:38:20 -07:00
Andrew Lytvynov
7364c6beec clientupdate/distsign: add new library for package signing/verification (#8943)
This library is intended for use during release to sign packages which
are then served from pkgs.tailscale.com.
The library is also then used by clients downloading packages for
`tailscale update` where OS package managers / app stores aren't used.

Updates https://github.com/tailscale/tailscale/issues/8760
Updates https://github.com/tailscale/tailscale/issues/6995

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
2023-08-22 13:35:30 -07:00
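The chain this library implements (root keys sign signing keys, which sign files) composes directly from Ed25519 primitives. A minimal sketch of verifying such a chain (assumed layout, not the distsign wire format):

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"errors"
	"fmt"
)

// verifyChain checks root -(sign)-> signing key -(sign)-> file.
func verifyChain(rootPub, signingPub ed25519.PublicKey, signingKeySig, file, fileSig []byte) error {
	if !ed25519.Verify(rootPub, signingPub, signingKeySig) {
		return errors.New("signing key not signed by a root key")
	}
	if !ed25519.Verify(signingPub, file, fileSig) {
		return errors.New("file not signed by the signing key")
	}
	return nil
}

func main() {
	rootPub, rootPriv, _ := ed25519.GenerateKey(rand.Reader)
	signPub, signPriv, _ := ed25519.GenerateKey(rand.Reader)
	keySig := ed25519.Sign(rootPriv, signPub)
	file := []byte("package contents") // illustrative
	fileSig := ed25519.Sign(signPriv, file)
	fmt.Println(verifyChain(rootPub, signPub, keySig, file, fileSig)) // <nil>
}
```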
Maisem Ali
4b13e6e087 go.mod: bump golang.org/x/net
Theory is that our long lived http2 connection to control would
get tainted by _something_ (unclear what) and would get closed.

This picks up the fix for golang/go#60818.

Updates tailscale/corp#5761

Signed-off-by: Maisem Ali <maisem@tailscale.com>
2023-08-22 16:25:19 -04:00
Will Norris
5ebff95a4c client/web: fix globbing for file embedding
src/**/* was only grabbing files in subdirectories, but not in the src
directory itself.

Updates tailscale/corp#13775

Signed-off-by: Will Norris <will@tailscale.com>
2023-08-22 12:42:34 -07:00
Marwan Sulaiman
000c0a70f6 ipn, ipn/ipnlocal: clean up documentation and use clock instead of time
This PR addresses a number of the follow-ups from PR #8491 that were written
after it was merged.

Updates #8489

Signed-off-by: Marwan Sulaiman <marwan@tailscale.com>
2023-08-22 19:17:29 +01:00
Will Norris
0df5507c81 client/web: combine embeds into a single embed.FS
Instead of embedding each file individually, embed them all into a
single embed filesystem. This is basically a noop for the current
frontend, but sets things up a little cleaner for the new frontend.

Also added an embed.FS for the source files needed to build the new
frontend. These files are not actually embedded into the binary (since
the FS is assigned to a blank identifier), but this causes `go mod vendor`
to copy them into the vendor directory.

Updates tailscale/corp#13775

Signed-off-by: Will Norris <will@tailscale.com>
2023-08-22 11:17:16 -07:00
Will Norris
3722b05465 release/dist: run yarn build before building CLI
This builds the assets for the new web client as part of our release
process. The path to the web client source is specified by the
-web-client-root flag.  This allows corp builds to first vendor the
tailscale.com module, and then build the web client assets in the vendor
directory.

The default value for the -web-client-root flag is empty, so no assets
are built by default.

This is an update of the previously reverted 0fb95ec

Updates tailscale/corp#13775

Signed-off-by: Will Norris <will@tailscale.com>
2023-08-22 11:12:47 -07:00
Sonia Appasamy
09e5e68297 client/web: track web client initializations
Updates tailscale/corp#13775

Signed-off-by: Sonia Appasamy <sonia@tailscale.com>
2023-08-22 14:11:19 -04:00
Brad Fitzpatrick
947def7688 types/netmap: remove redundant Netmap.Hostinfo
It was in SelfNode.Hostinfo anyway. The redundant copy was just
costing us an allocation per netmap (a Hostinfo.Clone).

Updates #1909

Change-Id: Ifac568aa5f8054d9419828489442a0f4559bc099
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2023-08-22 09:54:02 -07:00
Sonia Appasamy
50b558de74 client/web: hook up remaining legacy POST requests
Hooks up the remaining legacy POST requests from the React side in --dev.

Updates tailscale/corp#13775

Signed-off-by: Sonia Appasamy <sonia@tailscale.com>
2023-08-22 12:42:12 -04:00
Brad Fitzpatrick
db017d3b12 control/controlclient: remove quadratic allocs in mapSession
The mapSession code was previously quadratic: N clients in a netmap
send updates proportional to N and then for each, we do N units of
work. This removes most of that "N units of work" per update. There's
still a netmap-sized slice allocation per update (that's #8963), but
that's it.

Bit more efficient now, especially with larger netmaps:

                                 │     before     │                after                │
                                 │     sec/op     │   sec/op     vs base                │
    MapSessionDelta/size_10-8       47.935µ ±  3%   1.232µ ± 2%  -97.43% (p=0.000 n=10)
    MapSessionDelta/size_100-8      79.950µ ±  3%   1.642µ ± 2%  -97.95% (p=0.000 n=10)
    MapSessionDelta/size_1000-8    355.747µ ± 10%   4.400µ ± 1%  -98.76% (p=0.000 n=10)
    MapSessionDelta/size_10000-8   3079.71µ ±  3%   27.89µ ± 3%  -99.09% (p=0.000 n=10)
    geomean                          254.6µ         3.969µ       -98.44%

                                 │     before     │                after                 │
                                 │      B/op      │     B/op      vs base                │
    MapSessionDelta/size_10-8        9.651Ki ± 0%   2.395Ki ± 0%  -75.19% (p=0.000 n=10)
    MapSessionDelta/size_100-8      83.097Ki ± 0%   3.192Ki ± 0%  -96.16% (p=0.000 n=10)
    MapSessionDelta/size_1000-8     800.25Ki ± 0%   10.32Ki ± 0%  -98.71% (p=0.000 n=10)
    MapSessionDelta/size_10000-8   7896.04Ki ± 0%   82.32Ki ± 0%  -98.96% (p=0.000 n=10)
    geomean                          266.8Ki        8.977Ki       -96.64%

                                 │    before     │               after                │
                                 │   allocs/op   │ allocs/op   vs base                │
    MapSessionDelta/size_10-8         72.00 ± 0%   20.00 ± 0%  -72.22% (p=0.000 n=10)
    MapSessionDelta/size_100-8       523.00 ± 0%   20.00 ± 0%  -96.18% (p=0.000 n=10)
    MapSessionDelta/size_1000-8     5024.00 ± 0%   20.00 ± 0%  -99.60% (p=0.000 n=10)
    MapSessionDelta/size_10000-8   50024.00 ± 0%   20.00 ± 0%  -99.96% (p=0.000 n=10)
    geomean                          1.754k        20.00       -98.86%

Updates #1909

Change-Id: I41ee29358a5521ed762216a76d4cc5b0d16e46ac
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2023-08-22 08:59:57 -07:00
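The shape of the fix is the classic de-quadratic one: index peers by ID once, then apply each delta as an O(1) map update instead of an O(N) rescan. A conceptual sketch only (types are illustrative, not the mapSession code):

```go
package main

import "fmt"

type nodeID int64

type peer struct {
	ID   nodeID
	Name string
}

// session keeps peers indexed by ID so each update touches one entry
// rather than scanning a netmap-sized slice.
type session struct {
	peers map[nodeID]*peer
}

func (s *session) applyDelta(changed []*peer, removed []nodeID) {
	for _, p := range changed {
		s.peers[p.ID] = p // O(1) upsert per change
	}
	for _, id := range removed {
		delete(s.peers, id)
	}
}

func main() {
	s := &session{peers: map[nodeID]*peer{1: {ID: 1, Name: "old"}}}
	s.applyDelta([]*peer{{ID: 1, Name: "new"}}, nil)
	fmt.Println(s.peers[1].Name) // new
}
```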
shayne
a3b0654ed8 .github: add flakehub-publish-tagged.yml (#9009)
This workflow will publish a flake to flakehub when a tag is pushed to
the repository. It will only publish tags that match the pattern
`v*.*.*`.

Fixes #9008

Signed-off-by: Shayne Sweeney <shayne@tailscale.com>
2023-08-22 11:18:29 -04:00
Marwan Sulaiman
35ff5bf5a6 cmd/tailscale/cli, ipn/ipnlocal: [funnel] add stream mode
Adds ability to start Funnel in the foreground and stream incoming
connections. When the foreground process is stopped, Funnel is turned
back off for the port.

Example usage:
```
TAILSCALE_FUNNEL_V2=on tailscale funnel 8080
```

Updates #8489

Signed-off-by: Marwan Sulaiman <marwan@tailscale.com>
2023-08-22 10:07:34 -04:00
Brad Fitzpatrick
cb4a61f951 control/controlclient: don't clone self node on each NetworkMap
Drop in the bucket, but have to start somewhere.

Real wins will come once this is done for peers.

                                 │     before     │                after                │
                                 │      B/op      │     B/op       vs base              │
    MapSessionDelta/size_10-8      10.213Ki ± ∞ ¹   9.650Ki ± ∞ ¹  -5.51% (p=0.008 n=5)
    MapSessionDelta/size_100-8      83.64Ki ± ∞ ¹   83.08Ki ± ∞ ¹  -0.67% (p=0.008 n=5)
    MapSessionDelta/size_1000-8     800.8Ki ± ∞ ¹   800.3Ki ± ∞ ¹  -0.07% (p=0.008 n=5)
    MapSessionDelta/size_10000-8    7.712Mi ± ∞ ¹   7.711Mi ± ∞ ¹  -0.01% (p=0.008 n=5)
    geomean                         271.1Ki         266.8Ki        -1.59%

                                 │    before    │               after                │
                                 │  allocs/op   │  allocs/op    vs base              │
    MapSessionDelta/size_10-8       73.00 ± ∞ ¹    72.00 ± ∞ ¹  -1.37% (p=0.008 n=5)
    MapSessionDelta/size_100-8      524.0 ± ∞ ¹    523.0 ± ∞ ¹  -0.19% (p=0.008 n=5)
    MapSessionDelta/size_1000-8    5.025k ± ∞ ¹   5.024k ± ∞ ¹  -0.02% (p=0.008 n=5)
    MapSessionDelta/size_10000-8   50.02k ± ∞ ¹   50.02k ± ∞ ¹  -0.00% (p=0.040 n=5)
    geomean                        1.761k         1.754k        -0.40%

Updates #1909

Change-Id: Ie19dea3371de251d64d4373dd00422f53c2675ea
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2023-08-21 15:42:33 -07:00
Will Norris
a461d230db Revert "release/dist: run yarn build before building CLI"
This caused breakages on the build server:

synology/dsm7/x86_64: chdir /home/ubuntu/builds/2023-08-21T21-47-38Z-unstable-main-tagged-devices/0/client/web: no such file or directory
synology/dsm7/i686: chdir /home/ubuntu/builds/2023-08-21T21-47-38Z-unstable-main-tagged-devices/0/client/web: no such file or directory
synology/dsm7/armv8: chdir /home/ubuntu/builds/2023-08-21T21-47-38Z-unstable-main-tagged-devices/0/client/web: no such file or directory
...

Reverting while I investigate.

This reverts commit 0fb95ec07d.

Signed-off-by: Will Norris <will@tailscale.com>
2023-08-21 14:56:05 -07:00
Will Norris
0fb95ec07d release/dist: run yarn build before building CLI
This builds the assets for the new web client as part of our release
process. These assets will soon be embedded into the cmd/tailscale
binary, but that is not actually done yet.

Updates tailscale/corp#13775

Signed-off-by: Will Norris <will@tailscale.com>
2023-08-21 14:30:59 -07:00
Brad Fitzpatrick
84b94b3146 types/netmap, all: make NetworkMap.SelfNode a tailcfg.NodeView
Updates #1909

Change-Id: I8c470cbc147129a652c1d58eac9b790691b87606
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2023-08-21 13:34:49 -07:00
License Updater
699f9699ca licenses: update tailscale{,d} licenses
Signed-off-by: License Updater <noreply+license-updater@tailscale.com>
2023-08-21 12:36:37 -07:00
Flakes Updater
f6615931d7 go.mod.sri: update SRI hash for go.mod changes
Signed-off-by: Flakes Updater <noreply+flakes-updater@tailscale.com>
2023-08-21 12:04:38 -07:00
Sonia Appasamy
077bbb8403 client/web: add csrf protection to web client api
Adds csrf protection and hooks up an initial POST request from
the React web client.

Updates tailscale/corp#13775

Signed-off-by: Sonia Appasamy <sonia@tailscale.com>
2023-08-21 15:02:02 -04:00
Andrew Dunham
77ff705545 net/portmapper: never select port 0 in UPnP
Port 0 is interpreted, per the spec (but inconsistently among router
software), as requesting to map every single available port on the UPnP
gateway to the internal IP address. We'd previously avoided picking
ports below 1024 for one of the two UPnP methods (in #7457), and this
change moves that logic so that we avoid it in all cases.

Updates #8992

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I20d652c0cd47a24aef27f75c81f78ae53cc3c71e
2023-08-21 14:33:26 -04:00
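In other words, any requested external port must land in [1024, 65535] and never be 0 (which some gateways read as "map everything"). An illustrative pick, not the portmapper's actual selection code:

```go
package main

import (
	"fmt"
	"math/rand"
)

// pickUPnPExternalPort returns a random candidate external port,
// never 0 and never below 1024.
func pickUPnPExternalPort() uint16 {
	return uint16(1024 + rand.Intn(65536-1024))
}

func main() {
	fmt.Println(pickUPnPExternalPort())
}
```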
Brad Fitzpatrick
b5ff68a968 control/controlclient: flesh out mapSession to break up gigantic method
Now mapSession has a bunch more fields and methods, rather than being
just one massive func with a ton of local variables.

So far there are no major new optimizations, though. It should behave
the same as before.

This has been done with an eye towards testability (so tests can set
all the callback funcs as needed, or not, without a huge Direct client
or long-running HTTP requests), but this change doesn't add new tests
yet. That will follow in the changes which flesh out the NetmapUpdater
interface.

Updates #1909

Change-Id: Iad4e7442d5bbbe2614bd4b1dc4b02e27504898df
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2023-08-21 10:38:32 -07:00
Brad Fitzpatrick
1b223566dd util/linuxfw: fix typo in unexported doc comment
And flesh it out and use idiomatic doc style ("whether" for bools)
and end in a period while there anyway.

Updates #cleanup

Change-Id: Ieb82f13969656e2340c3510e7b102dc8e6932611
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2023-08-21 10:14:28 -07:00
Val
c85d7c301a tool: force HTTP/1.1 in curl to prevent hang behind load balancer
When running in our github CI environment, curl sometimes hangs while closing
the download from the nodejs.org server and fails with INTERNAL_ERROR. This is
likely caused by CI running behind some kind of load balancer or proxy that
handles HTTP/2 incorrectly in some minor way, so force curl to use HTTP 1.1.

Updates #8988

Signed-off-by: Val <valerie@tailscale.com>
2023-08-21 08:37:26 -07:00
Denton Gentry
f486041fd1 tsnet: add support for clientmetrics.
Updates https://github.com/tailscale/tailscale/issues/1748

Signed-off-by: Denton Gentry <dgentry@tailscale.com>
2023-08-21 06:26:40 -07:00
Val
c15997511d wgengine/magicsock: only accept pong sent by CLI ping
When sending a ping from the CLI, only accept a pong that is in reply
to the specific CLI ping we sent.

Updates #311

Signed-off-by: Val <valerie@tailscale.com>
2023-08-21 01:57:41 -07:00
Brad Fitzpatrick
165f0116f1 types/netmap: move some mutations earlier, remove, document some fields
And optimize the Persist setting a bit, allocating later and only mutating
fields when there's been a Node change.

Updates #1909

Change-Id: Iaddfd9e88ef76e1d18e8d0a41926eb44d0955312
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2023-08-20 16:26:11 -07:00
Brad Fitzpatrick
21170fb175 control/controlclient: scope a variable tighter, de-pointer a *time.Time
Just misc cleanups.

Updates #1909

Change-Id: I9d64cb6c46d634eb5fdf725c13a6c5e514e02e9a
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2023-08-20 15:06:24 -07:00
Maisem Ali
2548496cef types/views,cmd/viewer: add ByteSlice[T] to replace mem.RO
Add a new views.ByteSlice[T ~[]byte] to provide a better API to use
with views.

Updates #cleanup

Signed-off-by: Maisem Ali <maisem@tailscale.com>
2023-08-20 15:30:35 -04:00
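Conceptually, a ByteSlice is a read-only view over any named byte-slice type. A minimal sketch of the idea (not the actual types/views API):

```go
package main

import "fmt"

// ByteSlice is a read-only view over any named byte-slice type T.
type ByteSlice[T ~[]byte] struct{ x T }

func ByteSliceOf[T ~[]byte](b T) ByteSlice[T] { return ByteSlice[T]{x: b} }

func (v ByteSlice[T]) Len() int      { return len(v.x) }
func (v ByteSlice[T]) At(i int) byte { return v.x[i] }

// AsSlice returns a copy, so callers cannot mutate the viewed bytes.
func (v ByteSlice[T]) AsSlice() T { return append(T(nil), v.x...) }

type Key []byte // e.g. a key type whose raw bytes we want to expose read-only

func main() {
	v := ByteSliceOf(Key{1, 2, 3})
	fmt.Println(v.Len(), v.At(0)) // 3 1
}
```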
Maisem Ali
8a5ec72c85 cmd/cloner: use maps.Clone and ptr.To
Updates #cleanup

Signed-off-by: Maisem Ali <maisem@tailscale.com>
2023-08-20 13:47:26 -04:00
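Both helpers are tiny: maps.Clone is Go 1.21 stdlib, and ptr.To is a one-line generic. A sketch of what the generated cloners lean on (To is reproduced here for self-containment):

```go
package main

import (
	"fmt"
	"maps"
)

// To mirrors tailscale.com/types/ptr.To: a pointer to a copy of v.
func To[T any](v T) *T { return &v }

func main() {
	m := map[string]int{"a": 1}
	m2 := maps.Clone(m) // shallow copy via the Go 1.21 stdlib
	m2["a"] = 2
	fmt.Println(m["a"], m2["a"]) // 1 2

	p := To(42) // instead of: tmp := 42; p := &tmp
	fmt.Println(*p) // 42
}
```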
Brad Fitzpatrick
4511e7d64e ipn/ipnstate: add PeerStatus.AltSharerUserID, stop mangling Node.User
In b987b2ab18 (2021-01-12) when we introduced sharing we mapped
the sharer to the userid at a low layer, mostly to fix the display of
"tailscale status" and the client UIs, but also some tests.

The commit earlier today, 7dec09d169, removed the 2.5yo option
to let clients disable that automatic mapping, as clearly we were never
getting around to it.

This plumbs the Sharer UserID all the way to ipnstatus so the CLI
itself can choose to print out the Sharer's identity over the node's
original owner.

Then we stop mangling Node.User and let clients decide how they want
to render things.

To ease the migration for the Windows GUI (which currently operates on
tailcfg.Node via the NetMap from WatchIPNBus, instead of PeerStatus),
a new method Node.SharerOrUser is added to do the mapping of
Sharer-else-User.

Updates #1909
Updates tailscale/corp#1183

Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2023-08-20 08:18:52 -07:00
Maisem Ali
d483ed7774 tailcfg: generate RegisterResponse.Clone, remove manually written
It had a custom Clone func with a TODO to replace it with the cloner;
resolve that TODO. Had to pull the embedded Auth struct out into a named struct.

Updates #cleanup

Signed-off-by: Maisem Ali <maisem@tailscale.com>
2023-08-19 23:35:57 -04:00
Brad Fitzpatrick
282dad1b62 tailcfg: update docs on NetInfo.FirewallMode
Updates #391

Change-Id: Ifef196b31dd145f424fb0c0d0bb04565cc22c717
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2023-08-19 20:19:33 -07:00
Brad Fitzpatrick
d8191a9813 ipn/ipnlocal: fix regression in printf arg type
I screwed this up in 58a4fd43d as I expected. I even looked out for
cases like this (because this always happens) and I still missed
it. Vet doesn't flag these because they're not the standard printf
funcs it knows about. TODO: make our vet recognize all our
"logger.Logf" types.

Updates #8948

Change-Id: Iae267d5f81da49d0876b91c0e6dc451bf7dcd721
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2023-08-19 20:03:11 -07:00
Brad Fitzpatrick
f35ff84ee2 util/deephash: relax an annoyingly needy test
I'd added a test case of deephash against a tailcfg.Node to make sure
it worked at all more than anything. We don't care what the exact
bytes are in this test, just that it doesn't fail. So adjust for that.

Then when we make changes to tailcfg.Node and types under it, we don't
need to keep adjusting this test.

Updates #cleanup

Change-Id: Ibf4fa42820aeab8f5292fe65f9f92ffdb0b4407b
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2023-08-19 19:57:03 -07:00
Brad Fitzpatrick
93a806ba31 types/tkatype: add test for MarshaledSignature's JSON format
Lock in its wire format before a potential change to its Go type.

Updates #1909

Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2023-08-19 19:34:18 -07:00
Brad Fitzpatrick
7dec09d169 control/controlclient: remove Opts.KeepSharerAndUserSplit
It was added 2.5 years ago in c1dabd9436 but was never used.
Clearly that migration didn't matter.

We can attempt this again later if/when this matters.

Meanwhile this simplifies the code and thus makes working on other
current efforts in these parts of the code easier.

Updates #1909

Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2023-08-19 15:06:05 -07:00
Maisem Ali
02b47d123f tailcfg: remove unused Domain field from Login/User
Updates #cleanup

Signed-off-by: Maisem Ali <maisem@tailscale.com>
2023-08-18 20:07:17 -07:00
Brad Fitzpatrick
58a4fd43d8 types/netmap, all: use read-only tailcfg.NodeView in NetworkMap
Updates #8948

Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2023-08-18 20:04:35 -07:00
KevinLiang10
b040094b90 util/linuxfw: reorganize nftables rules to allow it to work with ufw
This commit tries to mimic the way iptables-nft work with the filewall rules. We
follow the convention of using tables like filter, nat and the conventional
chains, to make our nftables implementation work with ufw.

Updates: #391

Signed-off-by: KevinLiang10 <kevinliang@tailscale.com>
2023-08-18 18:24:05 -07:00
Will Norris
d4586ca75f tsnet/example/web-client: listen on localhost
Serving the web client on the tailscale interface, while useful for
remote management, is also inherently risky if ACLs are not configured
appropriately. Switch the example to listen only on localhost, which is
a much safer default. This is still a valuable example, since it
demonstrates how to have a web client connected to a tsnet instance.

Updates #13775

Signed-off-by: Will Norris <will@tailscale.com>
2023-08-18 14:57:08 -07:00
KevinLiang10
93cab56277 wgengine/router: fall back and set iptables as default again
Due to the conflict between our nftables implementation and ufw, a common
utility on Linux, we now want to take a step back to prevent regressions.
This will give users more chance to test our nftables support and heuristics.

Updates: #391
Signed-off-by: KevinLiang10 <kevinliang@tailscale.com>
2023-08-18 16:33:06 -04:00
Brad Fitzpatrick
6e57dee7eb cmd/viewer, types/views, all: un-special case slice of netip.Prefix
Make it just a views.Slice[netip.Prefix] instead of its own named type.

Having the special case led to circular dependencies in another WIP PR
of mine.

Updates #8948

Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2023-08-18 12:27:44 -07:00
Brad Fitzpatrick
261cc498d3 types/views: add LenIter method to slice view types
This is basically https://github.com/bradfitz/iter which was
a joke, but now that Go's adding range over int soonish, might
as well. It simplifies our code elsewhere that uses slice views.

Updates #8948

Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2023-08-18 08:21:52 -07:00
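The bradfitz/iter trick returns a slice of zero-byte elements whose only job is its length, so `for i := range v.LenIter()` counts 0..Len()-1 without allocating backing data. An illustrative sketch of a LenIter in that style (not the exact views code):

```go
package main

import "fmt"

// Slice is a stand-in for a read-only slice view.
type Slice[T any] struct{ x []T }

// LenIter returns a slice of empty structs with the same length as the
// view; struct{} is zero bytes, so no element data is allocated.
func (v Slice[T]) LenIter() []struct{} { return make([]struct{}, len(v.x)) }

func (v Slice[T]) At(i int) T { return v.x[i] }

func main() {
	v := Slice[string]{x: []string{"a", "b", "c"}}
	for i := range v.LenIter() {
		fmt.Println(i, v.At(i))
	}
}
```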
Brad Fitzpatrick
af2e4909b6 all: remove some Debug fields, NetworkMap.Debug, Reconfig Debug arg
Updates #8923

Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2023-08-17 19:04:30 -07:00
Andrew Lytvynov
86ad1ea60e clientupdate: parse /etc/synoinfo.conf to get CPU arch (#8940)
The hardware version in `/proc/sys/kernel/syno_hw_version` does not map
exactly to versions in
https://github.com/SynoCommunity/spksrc/wiki/Synology-and-SynoCommunity-Package-Architectures.
It contains some slightly different version formats.

Instead, `/etc/synoinfo.conf` exists and contains a `unique` line with
the CPU architecture encoded. Parse that out and filter through the list
of architectures that we have SPKs for.

Tested on DS218 and DS413j.

Updates #8927

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
2023-08-17 16:45:50 -07:00
Marwan Sulaiman
72d2122cad cmd/tailscale: change serve and funnel calls to StatusWithoutPeers
The tailscale serve|funnel commands frequently call the LocalBackend's Status,
but they never need the peers to be included. This PR changes the call to
StatusWithoutPeers, which should give a noticeable speed improvement.

Updates #8489

Signed-off-by: Marwan Sulaiman <marwan@tailscale.com>
2023-08-17 17:01:43 -04:00
Brad Fitzpatrick
121d1d002c tailcfg: add nodeAttrs for forcing OneCGNAT on/off [capver 71]
Updates #8923

Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2023-08-17 13:32:12 -07:00
Brad Fitzpatrick
25663b1307 tailcfg: remove most Debug fields, move bulk to nodeAttrs [capver 70]
Now a nodeAttr: ForceBackgroundSTUN, DERPRoute, TrimWGConfig,
DisableSubnetsIfPAC, DisableUPnP.

Kept support for, but also now a NodeAttr: RandomizeClientPort.

Removed: SetForceBackgroundSTUN, SetRandomizeClientPort (both never
used, sadly... never got around to them. But nodeAttrs are better
anyway), EnableSilentDisco (will be a nodeAttr later when that effort
resumes).

Updates #8923

Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2023-08-17 10:52:47 -07:00
David Anderson
e92adfe5e4 net/art: allow non-pointers as values
Values are still turned into pointers internally to maintain the
invariants of strideTable, but from the user's perspective it's
now possible to tbl.Insert(pfx, true) rather than
tbl.Insert(pfx, ptr.To(true)).

Updates #7781

Signed-off-by: David Anderson <danderson@tailscale.com>
2023-08-17 10:43:18 -07:00
Brad Fitzpatrick
bc0eb6b914 all: import x/exp/maps as xmaps to distinguish from Go 1.21 "maps"
Updates #8419

Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2023-08-17 09:54:18 -07:00
Brad Fitzpatrick
e8551d6b40 all: use Go 1.21 slices, maps instead of x/exp/{slices,maps}
Updates #8419

Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2023-08-17 08:42:35 -07:00
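Together with the xmaps alias from the commit above, the import-level result looks like this: Go 1.21's slices and maps cover most call sites, while anything still exclusive to x/exp/maps (such as Keys) is reached through the xmaps alias. A sketch:

```go
package main

import (
	"fmt"
	"maps"
	"slices"

	xmaps "golang.org/x/exp/maps"
)

func main() {
	s := []int{3, 1, 2}
	slices.Sort(s)                     // stdlib since Go 1.21
	fmt.Println(slices.Contains(s, 2)) // true

	m := map[string]int{"b": 2, "a": 1}
	_ = maps.Clone(m) // stdlib since Go 1.21

	keys := xmaps.Keys(m) // Keys is still x/exp-only, hence the alias
	slices.Sort(keys)
	fmt.Println(keys) // [a b]
}
```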
Denton Gentry
e8d140654a cmd/derper: count bootstrap dns unique lookups.
Updates https://github.com/tailscale/corp/issues/13979

Signed-off-by: Denton Gentry <dgentry@tailscale.com>
2023-08-17 08:02:56 -07:00
Denton Gentry
7e15c78a5a syncs: add map.Clear() method
Updates https://github.com/tailscale/corp/issues/13979

Signed-off-by: Denton Gentry <dgentry@tailscale.com>
2023-08-17 08:02:56 -07:00
Brad Fitzpatrick
239ad57446 tailcfg: move LogHeapPprof from Debug to c2n [capver 69]
And delete Debug.GoroutineDumpURL, which was already in c2n.

Updates #8923

Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2023-08-16 20:35:04 -07:00
Maisem Ali
24509f8b22 cmd/k8s-operator: add support for control plane assigned groups
Previously we would use the Impersonate-Group header to pass through
tags to the k8s api server. However, we would do nothing for non-tagged
nodes. Now that we have a way to specify these via peerCaps respect those
and send down groups for non-tagged nodes as well.

For tagged nodes, it defaults to sending down the tags as groups to retain
legacy behavior if there are no caps set. Otherwise, the tags are omitted.

Updates #5055

Signed-off-by: Maisem Ali <maisem@tailscale.com>
2023-08-16 19:40:47 -04:00
Brad Fitzpatrick
0913ec023b CODEOWNERS: add the start of an owners file
Updates tailscale/corp#13972

Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2023-08-16 15:57:29 -07:00
Brad Fitzpatrick
b090d61c0f tailcfg: rename prototype field to reflect its status
(Added earlier today in #8916, 57da1f150)

Updates tailscale/corp#13969

Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2023-08-16 15:34:51 -07:00
175 changed files with 6302 additions and 3324 deletions


@@ -0,0 +1,17 @@
on:
  workflow_dispatch:
  push:
    tags:
      - '^v[0-9]+\.[0-9]*[02468]+\.[0-9]+$'
jobs:
  publish:
    runs-on: "ubuntu-latest"
    permissions:
      id-token: "write"
      contents: "read"
    steps:
      - uses: "actions/checkout@v3"
      - uses: "DeterminateSystems/nix-installer-action@main"
      - uses: "DeterminateSystems/flakehub-push@main"
        with:
          visibility: "public"

CODEOWNERS (new file)

@@ -0,0 +1 @@
/tailcfg/ @tailscale/control-protocol-owners


@@ -4,13 +4,13 @@
package apitype
type DNSConfig struct {
Resolvers []DNSResolver `json:"resolvers"`
FallbackResolvers []DNSResolver `json:"fallbackResolvers"`
Routes map[string][]DNSResolver `json:"routes"`
Domains []string `json:"domains"`
Nameservers []string `json:"nameservers"`
Proxied bool `json:"proxied"`
DNSFilterURL string `json:"DNSFilterURL"`
Resolvers []DNSResolver `json:"resolvers"`
FallbackResolvers []DNSResolver `json:"fallbackResolvers"`
Routes map[string][]DNSResolver `json:"routes"`
Domains []string `json:"domains"`
Nameservers []string `json:"nameservers"`
Proxied bool `json:"proxied"`
TempCorpIssue13969 string `json:"TempCorpIssue13969,omitempty"`
}
type DNSResolver struct {


@@ -1057,6 +1057,29 @@ func (lc *LocalClient) NetworkLockDisable(ctx context.Context, secret []byte) er
return nil
}
// StreamServe returns an io.ReadCloser that streams serve/Funnel
// connections made to the provided HostPort.
//
// If Serve and Funnel were not already enabled for the HostPort in the
// ServeConfig, the backend enables them for the duration of the context's
// lifespan and then turns them back off once the context is closed. If
// either is already enabled, it remains that way, but logs are still streamed.
func (lc *LocalClient) StreamServe(ctx context.Context, hp ipn.ServeStreamRequest) (io.ReadCloser, error) {
	req, err := http.NewRequestWithContext(ctx, "POST", "http://"+apitype.LocalAPIHost+"/localapi/v0/stream-serve", jsonBody(hp))
	if err != nil {
		return nil, err
	}
	res, err := lc.doLocalRequestNiceError(req)
	if err != nil {
		return nil, err
	}
	if res.StatusCode != 200 {
		res.Body.Close()
		return nil, errors.New(res.Status)
	}
	return res.Body, nil
}
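A sketch of how a caller might consume this stream (ServeStreamRequest's fields are defined in the ipn package and left unset here; the zero-value LocalClient talks to the local tailscaled):

```go
package main

import (
	"context"
	"io"
	"log"
	"os"

	"tailscale.com/client/tailscale"
	"tailscale.com/ipn"
)

func main() {
	var lc tailscale.LocalClient
	var req ipn.ServeStreamRequest // populate per the ipn package

	ctx, cancel := context.WithCancel(context.Background())
	defer cancel() // canceling turns foreground serve/Funnel back off

	body, err := lc.StreamServe(ctx, req)
	if err != nil {
		log.Fatal(err)
	}
	defer body.Close()
	io.Copy(os.Stdout, body) // stream connection logs until canceled
}
```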
// GetServeConfig returns the current serve config.
//
// If the serve config is empty, it returns (nil, nil).
@@ -1068,6 +1091,17 @@ func (lc *LocalClient) GetServeConfig(ctx context.Context) (*ipn.ServeConfig, er
return getServeConfigFromJSON(body)
}
// GetMemoryServeConfig returns the current in-memory serve config.
//
// If the serve config is empty, it returns (nil, nil).
func (lc *LocalClient) GetMemoryServeConfig(ctx context.Context) (*ipn.ServeConfig, error) {
	body, err := lc.send(ctx, "GET", "/localapi/v0/serve-config?memory=true", 200, nil)
	if err != nil {
		return nil, fmt.Errorf("getting serve config: %w", err)
	}
	return getServeConfigFromJSON(body)
}
func getServeConfigFromJSON(body []byte) (sc *ipn.ServeConfig, err error) {
	if err := json.Unmarshal(body, &sc); err != nil {
		return nil, err

client/web/api.go (new file)

@@ -0,0 +1,41 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause

package web

import (
	"net/http"
	"strings"

	"github.com/gorilla/csrf"
	"tailscale.com/util/httpm"
)

type api struct {
	s *Server
}

// ServeHTTP serves requests for the web client api.
// It should only be called by Server.ServeHTTP, via Server.apiHandler,
// which protects the handler using gorilla csrf.
func (a *api) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("X-CSRF-Token", csrf.Token(r))
	user, err := authorize(w, r)
	if err != nil {
		return
	}
	path := strings.TrimPrefix(r.URL.Path, "/api")
	switch path {
	case "/data":
		switch r.Method {
		case httpm.GET:
			a.s.serveGetNodeDataJSON(w, r, user)
		case httpm.POST:
			a.s.servePostNodeUpdate(w, r)
		default:
			http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
		}
		return
	}
	http.Error(w, "invalid endpoint", http.StatusNotFound)
}


@@ -1,57 +0,0 @@
<html>
<head>
<title>Redirecting...</title>
<style>
html,
body {
height: 100%;
}
html {
background-color: rgb(249, 247, 246);
font-family: ui-sans-serif, system-ui, -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, "Helvetica Neue", Arial, "Noto Sans", sans-serif, "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol", "Noto Color Emoji";
line-height: 1.5;
-webkit-text-size-adjust: 100%;
-webkit-font-smoothing: antialiased;
-moz-osx-font-smoothing: grayscale;
}
body {
display: flex;
flex-direction: column;
align-items: center;
justify-content: center;
}
.spinner {
margin-bottom: 2rem;
border: 4px rgba(112, 110, 109, 0.5) solid;
border-left-color: transparent;
border-radius: 9999px;
width: 4rem;
height: 4rem;
-webkit-animation: spin 700ms linear infinite;
animation: spin 800ms linear infinite;
}
.label {
color: rgb(112, 110, 109);
padding-left: 0.4rem;
}
@-webkit-keyframes spin {
to {
transform: rotate(360deg);
}
}
@keyframes spin {
to {
transform: rotate(360deg);
}
}
</style>
</head>
<body>
<div class="spinner"></div>
<div class="label">Redirecting...</div>
</body>


@@ -8,10 +8,12 @@
},
"private": true,
"dependencies": {
"classnames": "^2.3.1",
"react": "^18.2.0",
"react-dom": "^18.2.0"
},
"devDependencies": {
"@types/classnames": "^2.2.10",
"@types/react": "^18.0.20",
"@types/react-dom": "^18.0.6",
"@vitejs/plugin-react-swc": "^3.3.2",

client/web/src/api.ts (new file)

@@ -0,0 +1,32 @@
let csrfToken: string

// apiFetch wraps the standard JS fetch function
// with csrf header management.
export function apiFetch(
  input: RequestInfo | URL,
  init?: RequestInit | undefined
): Promise<Response> {
  return fetch(input, {
    ...init,
    headers: withCsrfToken(init?.headers),
  }).then((r) => {
    updateCsrfToken(r)
    if (!r.ok) {
      return r.text().then((err) => {
        throw new Error(err)
      })
    }
    return r
  })
}

function withCsrfToken(h?: HeadersInit): HeadersInit {
  return { ...h, "X-CSRF-Token": csrfToken }
}

function updateCsrfToken(r: Response) {
  const tok = r.headers.get("X-CSRF-Token")
  if (tok) {
    csrfToken = tok
  }
}


@@ -3,7 +3,9 @@ import { Footer, Header, IP, State } from "src/components/legacy"
import useNodeData from "src/hooks/node-data"
export default function App() {
const data = useNodeData()
// TODO(sonia): use isPosting value from useNodeData
// to fill loading states.
const { data, updateNode } = useNodeData()
return (
<div className="py-14">
@@ -13,9 +15,9 @@ export default function App() {
) : (
<>
<main className="container max-w-lg mx-auto mb-8 py-6 px-8 bg-white rounded-md shadow-2xl">
<Header data={data} />
<Header data={data} updateNode={updateNode} />
<IP data={data} />
<State data={data} />
<State data={data} updateNode={updateNode} />
</main>
<Footer data={data} />
</>


@@ -1,14 +1,19 @@
import cx from "classnames"
import React from "react"
import { NodeData } from "src/hooks/node-data"
import { NodeData, NodeUpdate } from "src/hooks/node-data"
// TODO(tailscale/corp#13775): legacy.tsx contains a set of components
// that (crudely) implement the pre-2023 web client. These are implemented
// purely to ease migration to the new React-based web client, and will
// eventually be completely removed.
export function Header(props: { data: NodeData }) {
const { data } = props
export function Header({
data,
updateNode,
}: {
data: NodeData
updateNode: (update: NodeUpdate) => void
}) {
return (
<header className="flex justify-between items-center min-width-0 py-2 mb-8">
<svg
@@ -60,41 +65,52 @@ export function Header(props: { data: NodeData }) {
></circle>
</svg>
<div className="flex items-center justify-end space-x-2 w-2/3">
{data.Profile && (
<>
<div className="text-right w-full leading-4">
<h4 className="truncate leading-normal">
{data.Profile.LoginName}
</h4>
<div className="text-xs text-gray-500 text-right">
<a href="#" className="hover:text-gray-700 js-loginButton">
Switch account
</a>{" "}
|{" "}
<a href="#" className="hover:text-gray-700 js-loginButton">
Reauthenticate
</a>{" "}
|{" "}
<a href="#" className="hover:text-gray-700 js-logoutButton">
Logout
</a>
{data.Profile &&
data.Status !== "NoState" &&
data.Status !== "NeedsLogin" && (
<>
<div className="text-right w-full leading-4">
<h4 className="truncate leading-normal">
{data.Profile.LoginName}
</h4>
<div className="text-xs text-gray-500 text-right">
<button
onClick={() => updateNode({ Reauthenticate: true })}
className="hover:text-gray-700"
>
Switch account
</button>{" "}
|{" "}
<button
onClick={() => updateNode({ Reauthenticate: true })}
className="hover:text-gray-700"
>
Reauthenticate
</button>{" "}
|{" "}
<button
onClick={() => updateNode({ ForceLogout: true })}
className="hover:text-gray-700"
>
Logout
</button>
</div>
</div>
</div>
<div className="relative flex-shrink-0 w-8 h-8 rounded-full overflow-hidden">
{data.Profile.ProfilePicURL ? (
<div
className="w-8 h-8 flex pointer-events-none rounded-full bg-gray-200"
style={{
backgroundImage: `url(${data.Profile.ProfilePicURL})`,
backgroundSize: "cover",
}}
/>
) : (
<div className="w-8 h-8 flex pointer-events-none rounded-full border border-gray-400 border-dashed" />
)}
</div>
</>
)}
<div className="relative flex-shrink-0 w-8 h-8 rounded-full overflow-hidden">
{data.Profile.ProfilePicURL ? (
<div
className="w-8 h-8 flex pointer-events-none rounded-full bg-gray-200"
style={{
backgroundImage: `url(${data.Profile.ProfilePicURL})`,
backgroundSize: "cover",
}}
/>
) : (
<div className="w-8 h-8 flex pointer-events-none rounded-full border border-gray-400 border-dashed" />
)}
</div>
</>
)}
</div>
</header>
)
@@ -128,9 +144,9 @@ export function IP(props: { data: NodeData }) {
<line x1="6" y1="6" x2="6.01" y2="6"></line>
<line x1="6" y1="18" x2="6.01" y2="18"></line>
</svg>
<div>
<h4 className="font-semibold truncate mr-2">{data.DeviceName}</h4>
</div>
<h4 className="font-semibold truncate mr-2">
{data.DeviceName || "Your device"}
</h4>
</div>
<h5>{data.IP}</h5>
</div>
@@ -162,9 +178,13 @@ export function IP(props: { data: NodeData }) {
)
}
export function State(props: { data: NodeData }) {
const { data } = props
export function State({
data,
updateNode,
}: {
data: NodeData
updateNode: (update: NodeUpdate) => void
}) {
switch (data.Status) {
case "NeedsLogin":
case "NoState":
@@ -185,11 +205,12 @@ export function State(props: { data: NodeData }) {
.
</p>
</div>
<a href="#" className="mb-4 js-loginButton" target="_blank">
<button className="button button-blue w-full">
Reauthenticate
</button>
</a>
<button
onClick={() => updateNode({ Reauthenticate: true })}
className="button button-blue w-full mb-4"
>
Reauthenticate
</button>
</>
)
} else {
@@ -210,9 +231,12 @@ export function State(props: { data: NodeData }) {
.
</p>
</div>
<a href="#" className="mb-4 js-loginButton" target="_blank">
<button className="button button-blue w-full">Log In</button>
</a>
<button
onClick={() => updateNode({ Reauthenticate: true })}
className="button button-blue w-full mb-4"
>
Log In
</button>
</>
)
}
@@ -232,25 +256,20 @@ export function State(props: { data: NodeData }) {
device name or IP address above.
</p>
</div>
<div className="mb-4">
<a href="#" className="mb-4 js-advertiseExitNode">
{data.AdvertiseExitNode ? (
<button
className="button button-red button-medium"
id="enabled"
>
Stop advertising Exit Node
</button>
) : (
<button
className="button button-blue button-medium"
id="enabled"
>
Advertise as Exit Node
</button>
)}
</a>
</div>
<button
className={cx("button button-medium mb-4", {
"button-red": data.AdvertiseExitNode,
"button-blue": !data.AdvertiseExitNode,
})}
id="enabled"
onClick={() =>
updateNode({ AdvertiseExitNode: !data.AdvertiseExitNode })
}
>
{data.AdvertiseExitNode
? "Stop advertising Exit Node"
: "Advertise as Exit Node"}
</button>
</>
)
}


@@ -1,4 +1,5 @@
import { useEffect, useState } from "react"
import { useCallback, useEffect, useState } from "react"
import { apiFetch } from "src/api"
export type NodeData = {
Profile: UserProfile
@@ -22,16 +23,101 @@ export type UserProfile = {
ProfilePicURL: string
}
export type NodeUpdate = {
AdvertiseRoutes?: string
AdvertiseExitNode?: boolean
Reauthenticate?: boolean
ForceLogout?: boolean
}
// useNodeData returns basic data about the current node.
export default function useNodeData() {
const [data, setData] = useState<NodeData>()
const [isPosting, setIsPosting] = useState<boolean>(false)
useEffect(() => {
fetch("/api/data")
.then((response) => response.json())
.then((json) => setData(json))
const fetchNodeData = useCallback(() => {
apiFetch("/api/data")
.then((r) => r.json())
.then((data) => setData(data))
.catch((error) => console.error(error))
}, [])
}, [setData])
return data
const updateNode = useCallback(
(update: NodeUpdate) => {
// The contents of this function are mostly copied over
// from the legacy client's web.html file.
// It makes all data updates through one API endpoint.
// As we build out the web client in React,
// this endpoint will eventually be deprecated.
if (isPosting || !data) {
return
}
setIsPosting(true)
update = {
...update,
// Default to current data value for any unset fields.
AdvertiseRoutes:
update.AdvertiseRoutes !== undefined
? update.AdvertiseRoutes
: data.AdvertiseRoutes,
AdvertiseExitNode:
update.AdvertiseExitNode !== undefined
? update.AdvertiseExitNode
: data.AdvertiseExitNode,
}
const urlParams = new URLSearchParams(window.location.search)
const nextParams = new URLSearchParams({ up: "true" })
const token = urlParams.get("SynoToken")
if (token) {
nextParams.set("SynoToken", token)
}
const search = nextParams.toString()
const url = `/api/data${search ? `?${search}` : ""}`
var body, contentType: string
if (data.IsUnraid) {
const params = new URLSearchParams()
params.append("csrf_token", data.UnraidToken)
params.append("ts_data", JSON.stringify(update))
body = params.toString()
contentType = "application/x-www-form-urlencoded;charset=UTF-8"
} else {
body = JSON.stringify(update)
contentType = "application/json"
}
apiFetch(url, {
method: "POST",
headers: { Accept: "application/json", "Content-Type": contentType },
body: body,
})
.then((r) => r.json())
.then((r) => {
setIsPosting(false)
const err = r["error"]
if (err) {
throw new Error(err)
}
const url = r["url"]
if (url) {
window.open(url, "_blank")
}
fetchNodeData()
})
.catch((err) => alert("Failed operation: " + err.message))
},
[data]
)
useEffect(
fetchNodeData,
// Initial data load.
[]
)
return { data, updateNode, isPosting }
}


@@ -7,8 +7,9 @@ package web
import (
"bytes"
"context"
"crypto/rand"
"crypto/tls"
_ "embed"
"embed"
"encoding/json"
"encoding/xml"
"fmt"
@@ -23,6 +24,7 @@ import (
"os/exec"
"strings"
"github.com/gorilla/csrf"
"tailscale.com/client/tailscale"
"tailscale.com/envknob"
"tailscale.com/ipn"
@@ -31,20 +33,22 @@ import (
"tailscale.com/net/netutil"
"tailscale.com/tailcfg"
"tailscale.com/util/groupmember"
"tailscale.com/util/httpm"
"tailscale.com/version/distro"
)
//go:embed web.html
var webHTML string
// This contains all files needed to build the frontend assets.
// Because we assign this to the blank identifier, it does not actually embed the files.
// However, this does cause `go mod vendor` to include the files when vendoring the package.
// External packages that use the web client can `go mod vendor`, run `yarn build` to
// build the assets, then those asset bundles will be able to be embedded.
//
//go:embed yarn.lock index.html *.js *.json src/*
var _ embed.FS
//go:embed web.css
var webCSS string
//go:embed web.html web.css
var embeddedFS embed.FS
//go:embed auth-redirect.html
var authenticationRedirectHTML string
var tmpl *template.Template
var tmpls *template.Template
// Server is the backend server for a Tailscale web client.
type Server struct {
@@ -52,6 +56,8 @@ type Server struct {
devMode bool
devProxy *httputil.ReverseProxy // only filled when devMode is on
apiHandler http.Handler // csrf-protected api handler
}
// NewServer constructs a new Tailscale web client server.
@@ -70,13 +76,18 @@ func NewServer(devMode bool, lc *tailscale.LocalClient) (s *Server, cleanup func
if s.devMode {
cleanup = s.startDevServer()
s.addProxyToDevServer()
// Create new handler for "/api" requests.
// And protect with gorilla csrf.
csrfProtect := csrf.Protect(csrfKey())
s.apiHandler = csrfProtect(&api{s: s})
}
s.lc.IncrementCounter(context.Background(), "web_client_initialization", 1)
return s, cleanup
}
func init() {
tmpl = template.Must(template.New("web.html").Parse(webHTML))
template.Must(tmpl.New("web.css").Parse(webCSS))
tmpls = template.Must(template.New("").ParseFS(embeddedFS, "*"))
}
// authorize returns the name of the user accessing the web UI after verifying
@@ -271,19 +282,9 @@ req.send(null);
// ServeHTTP processes all requests for the Tailscale web client.
func (s *Server) ServeHTTP(w http.ResponseWriter, r *http.Request) {
if s.devMode {
if r.URL.Path == "/api/data" {
user, err := authorize(w, r)
if err != nil {
return
}
switch r.Method {
case httpm.GET:
s.serveGetNodeDataJSON(w, r, user)
case httpm.POST:
s.servePostNodeUpdate(w, r)
default:
http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
}
if strings.HasPrefix(r.URL.Path, "/api/") {
// Pass through to other handlers via CSRF protection.
s.apiHandler.ServeHTTP(w, r)
return
}
// When in dev mode, proxy to the Vite dev server.
@@ -301,13 +302,11 @@ func (s *Server) ServeHTTP(w http.ResponseWriter, r *http.Request) {
}
switch {
case r.URL.Path == "/redirect" || r.URL.Path == "/redirect/":
io.WriteString(w, authenticationRedirectHTML)
return
case r.Method == "POST":
s.servePostNodeUpdate(w, r)
return
default:
s.lc.IncrementCounter(context.Background(), "web_client_page_load", 1)
s.serveGetNodeData(w, r, user)
return
}
@@ -380,7 +379,7 @@ func (s *Server) serveGetNodeData(w http.ResponseWriter, r *http.Request, user s
return
}
buf := new(bytes.Buffer)
if err := tmpl.Execute(buf, *data); err != nil {
if err := tmpls.ExecuteTemplate(buf, "web.html", data); err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
@@ -527,3 +526,14 @@ func (s *Server) tailscaleUp(ctx context.Context, st *ipnstate.Status, postData
}
}
}
// csrfKey creates a new random csrf token.
// If an error surfaces during key creation,
// the error is logged and the active process terminated.
func csrfKey() []byte {
	key := make([]byte, 32)
	if _, err := rand.Read(key); err != nil {
		log.Fatalf("error generating CSRF key: %v", err)
	}
	return key
}


@@ -543,6 +543,13 @@
resolved "https://registry.yarnpkg.com/@types/chai/-/chai-4.3.5.tgz#ae69bcbb1bebb68c4ac0b11e9d8ed04526b3562b"
integrity sha512-mEo1sAde+UCE6b2hxn332f1g1E8WfYRu6p5SvTKr2ZKC1f7gFJXk4h5PyGP9Dt6gCaG8y8XhwnXWC6Iy2cmBng==
"@types/classnames@^2.2.10":
version "2.3.1"
resolved "https://registry.yarnpkg.com/@types/classnames/-/classnames-2.3.1.tgz#3c2467aa0f1a93f1f021e3b9bcf938bd5dfdc0dd"
integrity sha512-zeOWb0JGBoVmlQoznvqXbE0tEC/HONsnoUNH19Hc96NFsTAwTXbTqb8FMYkru1F/iqp7a18Ws3nWJvtA1sHD1A==
dependencies:
classnames "*"
"@types/estree@^1.0.0":
version "1.0.1"
resolved "https://registry.yarnpkg.com/@types/estree/-/estree-1.0.1.tgz#aa22750962f3bf0e79d753d3cc067f010c95f194"
@@ -798,6 +805,11 @@ chokidar@^3.5.3:
optionalDependencies:
fsevents "~2.3.2"
classnames@*, classnames@^2.3.1:
version "2.3.2"
resolved "https://registry.yarnpkg.com/classnames/-/classnames-2.3.2.tgz#351d813bf0137fcc6a76a16b88208d2560a0d924"
integrity sha512-CSbhY4cFEJRe6/GQzIk5qXZ4Jeg5pcsP7b5peFSDpffpe1cqjASH/n9UTjBwOp6XpMSTwQ8Za2K5V02ueA7Tmw==
color-convert@^1.9.0:
version "1.9.3"
resolved "https://registry.yarnpkg.com/color-convert/-/color-convert-1.9.3.tgz#bb71850690e1f136567de629d2d5471deda4c1e8"


@@ -28,9 +28,7 @@ import (
"time"
"github.com/google/uuid"
"tailscale.com/hostinfo"
"tailscale.com/net/tshttpproxy"
"tailscale.com/tailcfg"
"tailscale.com/types/logger"
"tailscale.com/util/must"
"tailscale.com/util/winutil"
@@ -187,6 +185,8 @@ func (up *updater) confirm(ver string) bool {
return true
}
const synoinfoConfPath = "/etc/synoinfo.conf"
func (up *updater) updateSynology() error {
if up.Version != "" {
return errors.New("installing a specific version on Synology is not supported")
@@ -194,7 +194,7 @@ func (up *updater) updateSynology() error {
// Get the latest version and list of SPKs from pkgs.tailscale.com.
osName := fmt.Sprintf("dsm%d", distro.DSMVersion())
arch, err := synoArch(hostinfo.New())
arch, err := synoArch(runtime.GOARCH, synoinfoConfPath)
if err != nil {
return err
}
@@ -245,51 +245,62 @@ func (up *updater) updateSynology() error {
// synoArch returns the Synology CPU architecture matching one of the SPK
// architectures served from pkgs.tailscale.com.
func synoArch(hinfo *tailcfg.Hostinfo) (string, error) {
func synoArch(goArch, synoinfoPath string) (string, error) {
// Most Synology boxes just use a different arch name from GOARCH.
arch := map[string]string{
"amd64": "x86_64",
"386": "i686",
"arm64": "armv8",
}[hinfo.GoArch]
// Here's the fun part, some older ARM boxes require you to use SPKs
// specifically for their CPU.
//
// See https://github.com/SynoCommunity/spksrc/wiki/Synology-and-SynoCommunity-Package-Architectures
// for a complete list. Here, we override GOARCH for those older boxes that
// support at least DSM6.
//
// This is an artisanal hand-crafted list based on the wiki page. Some
// values may be wrong, since we don't have all those devices to actually
// test with.
switch hinfo.DeviceModel {
case "DS213air", "DS213", "DS413j",
"DS112", "DS112+", "DS212", "DS212+", "RS212", "RS812", "DS212j", "DS112j",
"DS111", "DS211", "DS211+", "DS411slim", "DS411", "RS411", "DS211j", "DS411j":
arch = "88f6281"
case "NVR1218", "NVR216", "VS960HD", "VS360HD":
arch = "hi3535"
case "DS1517", "DS1817", "DS416", "DS2015xs", "DS715", "DS1515", "DS215+":
arch = "alpine"
case "DS216se", "DS115j", "DS114", "DS214se", "DS414slim", "RS214", "DS14", "EDS14", "DS213j":
arch = "armada370"
case "DS115", "DS215j":
arch = "armada375"
case "DS419slim", "DS218j", "RS217", "DS116", "DS216j", "DS216", "DS416slim", "RS816", "DS416j":
arch = "armada38x"
case "RS815", "DS214", "DS214+", "DS414", "RS814":
arch = "armadaxp"
case "DS414j":
arch = "comcerto2k"
case "DS216play":
arch = "monaco"
}
}[goArch]
if arch == "" {
return "", fmt.Errorf("cannot determine CPU architecture for Synology model %q (Go arch %q), please report a bug at https://github.com/tailscale/tailscale/issues/new/choose", hinfo.DeviceModel, hinfo.GoArch)
// Here's the fun part, some older ARM boxes require you to use SPKs
// specifically for their CPU. See
// https://github.com/SynoCommunity/spksrc/wiki/Synology-and-SynoCommunity-Package-Architectures
// for a complete list.
//
// Some CPUs will map to neither this list nor the goArch map above, and we
// don't have SPKs for them.
cpu, err := parseSynoinfo(synoinfoPath)
if err != nil {
return "", fmt.Errorf("failed to get CPU architecture: %w", err)
}
switch cpu {
case "88f6281", "88f6282", "hi3535", "alpine", "armada370",
"armada375", "armada38x", "armadaxp", "comcerto2k", "monaco":
arch = cpu
default:
return "", fmt.Errorf("unsupported Synology CPU architecture %q (Go arch %q), please report a bug at https://github.com/tailscale/tailscale/issues/new/choose", cpu, goArch)
}
}
return arch, nil
}
func parseSynoinfo(path string) (string, error) {
f, err := os.Open(path)
if err != nil {
return "", err
}
defer f.Close()
// Look for a line like:
// unique="synology_88f6282_413j"
// Extract the CPU in the middle (88f6282 in the above example).
s := bufio.NewScanner(f)
for s.Scan() {
l := s.Text()
if !strings.HasPrefix(l, "unique=") {
continue
}
parts := strings.SplitN(l, "_", 3)
if len(parts) != 3 {
return "", fmt.Errorf(`malformed %q: found %q, expected format like 'unique="synology_$cpu_$model'`, path, l)
}
return parts[1], nil
}
return "", fmt.Errorf(`missing "unique=" field in %q`, path)
}
func (up *updater) updateDebLike() error {
ver, err := requestedTailscaleVersion(up.Version, up.track)
if err != nil {


@@ -8,8 +8,6 @@ import (
"os"
"path/filepath"
"testing"
"tailscale.com/tailcfg"
)
func TestUpdateDebianAptSourcesListBytes(t *testing.T) {
@@ -446,29 +444,151 @@ tailscale installed size:
func TestSynoArch(t *testing.T) {
tests := []struct {
goarch string
model string
want string
wantErr bool
goarch string
synoinfoUnique string
want string
wantErr bool
}{
{goarch: "amd64", model: "DS224+", want: "x86_64"},
{goarch: "arm64", model: "DS124", want: "armv8"},
{goarch: "386", model: "DS415play", want: "i686"},
{goarch: "arm", model: "DS213air", want: "88f6281"},
{goarch: "arm", model: "NVR1218", want: "hi3535"},
{goarch: "arm", model: "DS1517", want: "alpine"},
{goarch: "arm", model: "DS216se", want: "armada370"},
{goarch: "arm", model: "DS115", want: "armada375"},
{goarch: "arm", model: "DS419slim", want: "armada38x"},
{goarch: "arm", model: "RS815", want: "armadaxp"},
{goarch: "arm", model: "DS414j", want: "comcerto2k"},
{goarch: "arm", model: "DS216play", want: "monaco"},
{goarch: "riscv64", model: "DS999", wantErr: true},
{goarch: "amd64", synoinfoUnique: "synology_x86_224", want: "x86_64"},
{goarch: "arm64", synoinfoUnique: "synology_armv8_124", want: "armv8"},
{goarch: "386", synoinfoUnique: "synology_i686_415play", want: "i686"},
{goarch: "arm", synoinfoUnique: "synology_88f6281_213air", want: "88f6281"},
{goarch: "arm", synoinfoUnique: "synology_88f6282_413j", want: "88f6282"},
{goarch: "arm", synoinfoUnique: "synology_hi3535_NVR1218", want: "hi3535"},
{goarch: "arm", synoinfoUnique: "synology_alpine_1517", want: "alpine"},
{goarch: "arm", synoinfoUnique: "synology_armada370_216se", want: "armada370"},
{goarch: "arm", synoinfoUnique: "synology_armada375_115", want: "armada375"},
{goarch: "arm", synoinfoUnique: "synology_armada38x_419slim", want: "armada38x"},
{goarch: "arm", synoinfoUnique: "synology_armadaxp_RS815", want: "armadaxp"},
{goarch: "arm", synoinfoUnique: "synology_comcerto2k_414j", want: "comcerto2k"},
{goarch: "arm", synoinfoUnique: "synology_monaco_216play", want: "monaco"},
{goarch: "ppc64", synoinfoUnique: "synology_qoriq_413", wantErr: true},
}
for _, tt := range tests {
t.Run(fmt.Sprintf("%s-%s", tt.goarch, tt.model), func(t *testing.T) {
got, err := synoArch(&tailcfg.Hostinfo{GoArch: tt.goarch, DeviceModel: tt.model})
t.Run(fmt.Sprintf("%s-%s", tt.goarch, tt.synoinfoUnique), func(t *testing.T) {
synoinfoConfPath := filepath.Join(t.TempDir(), "synoinfo.conf")
if err := os.WriteFile(
synoinfoConfPath,
[]byte(fmt.Sprintf("unique=%q\n", tt.synoinfoUnique)),
0600,
); err != nil {
t.Fatal(err)
}
got, err := synoArch(tt.goarch, synoinfoConfPath)
if err != nil {
if !tt.wantErr {
t.Fatalf("got unexpected error %v", err)
}
return
}
if tt.wantErr {
t.Fatalf("got %q, expected an error", got)
}
if got != tt.want {
t.Errorf("got %q, want %q", got, tt.want)
}
})
}
}
func TestParseSynoinfo(t *testing.T) {
tests := []struct {
desc string
content string
want string
wantErr bool
}{
{
desc: "double-quoted",
content: `
company_title="Synology"
unique="synology_88f6281_213air"
`,
want: "88f6281",
},
{
desc: "single-quoted",
content: `
company_title="Synology"
unique='synology_88f6281_213air'
`,
want: "88f6281",
},
{
desc: "unquoted",
content: `
company_title="Synology"
unique=synology_88f6281_213air
`,
want: "88f6281",
},
{
desc: "missing unique",
content: `
company_title="Synology"
`,
wantErr: true,
},
{
desc: "empty unique",
content: `
company_title="Synology"
unique=
`,
wantErr: true,
},
{
desc: "empty unique double-quoted",
content: `
company_title="Synology"
unique=""
`,
wantErr: true,
},
{
desc: "empty unique single-quoted",
content: `
company_title="Synology"
unique=''
`,
wantErr: true,
},
{
desc: "malformed unique",
content: `
company_title="Synology"
unique="synology_88f6281"
`,
wantErr: true,
},
{
desc: "empty file",
content: ``,
wantErr: true,
},
{
desc: "empty lines and comments",
content: `
# In a file named synoinfo? Shocking!
company_title="Synology"
# unique= is_a_field_that_follows
unique="synology_88f6281_213air"
`,
want: "88f6281",
},
}
for _, tt := range tests {
t.Run(tt.desc, func(t *testing.T) {
synoinfoConfPath := filepath.Join(t.TempDir(), "synoinfo.conf")
if err := os.WriteFile(synoinfoConfPath, []byte(tt.content), 0600); err != nil {
t.Fatal(err)
}
got, err := parseSynoinfo(synoinfoConfPath)
if err != nil {
if !tt.wantErr {
t.Fatalf("got unexpected error %v", err)

View File

@@ -0,0 +1,338 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
// Package distsign implements signature and validation of arbitrary
// distributable files.
//
// There are 3 parties in this exchange:
// - builder, which creates files, signs them with signing keys and publishes
// to server
// - server, which distributes public signing keys, files and signatures
// - client, which downloads files and signatures from server, and validates
// the signatures
//
// There are 2 types of keys:
// - signing keys, that sign individual distributable files on the builder
// - root keys, that sign signing keys and are kept offline
//
// root keys -(sign)-> signing keys -(sign)-> files
//
// All keys are asymmetric Ed25519 key pairs.
//
// The server serves static files under some known prefix. The kinds of files are:
// - distsign.pub - bundle of PEM-encoded public signing keys
// - distsign.pub.sig - signature of distsign.pub using one of the root keys
// - $file - any distributable file
// - $file.sig - signature of $file using any of the signing keys
//
// The root public keys are baked into the client software at compile time.
// These keys are long-lived and prove the validity of current signing keys
// from distsign.pub. To rotate root keys, a new client release must be
// published; they are not rotated dynamically. There are multiple root keys in
// different locations specifically to allow this rotation without using the
// discarded root key for any new signatures.
//
// The signing public keys are fetched by the client dynamically before every
// download and can be rotated more readily, assuming that most deployed
// clients trust the root keys used to issue fresh signing keys.
package distsign
import (
"crypto"
"crypto/ed25519"
"crypto/rand"
"encoding/binary"
"encoding/pem"
"errors"
"fmt"
"hash"
"io"
"net/http"
"net/url"
"os"
"github.com/hdevalence/ed25519consensus"
"golang.org/x/crypto/blake2s"
)
const (
pemTypePrivate = "PRIVATE KEY"
pemTypePublic = "PUBLIC KEY"
downloadSizeLimit = 1 << 29 // 512MB
signingKeysSizeLimit = 1 << 20 // 1MB
signatureSizeLimit = ed25519.SignatureSize
)
// GenerateKey generates a new key pair and encodes it as PEM.
func GenerateKey() (priv, pub []byte, err error) {
pub, priv, err = ed25519.GenerateKey(rand.Reader)
if err != nil {
return nil, nil, err
}
return pem.EncodeToMemory(&pem.Block{
Type: pemTypePrivate,
Bytes: []byte(priv),
}), pem.EncodeToMemory(&pem.Block{
Type: pemTypePublic,
Bytes: []byte(pub),
}), nil
}
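As a usage sketch, the PEM pair returned by GenerateKey can be written straight to disk; the file names and the standalone program below are hypothetical, not part of this change:

package main

import (
	"log"
	"os"

	"tailscale.com/clientupdate/distsign"
)

func main() {
	priv, pub, err := distsign.GenerateKey()
	if err != nil {
		log.Fatal(err)
	}
	// Hypothetical destinations; a root private key should be kept offline.
	if err := os.WriteFile("root-key.pem", priv, 0o600); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("root-key.pub", pub, 0o644); err != nil {
		log.Fatal(err)
	}
}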
// RootKey is a root key Signer used to sign signing keys.
type RootKey Signer
// SignSigningKeys signs the bundle of public signing keys. The bundle must be
// a sequence of PEM blocks joined with newlines.
func (s *RootKey) SignSigningKeys(pubBundle []byte) ([]byte, error) {
return s.Sign(nil, pubBundle, crypto.Hash(0))
}
// SigningKey is a signing key Signer used to sign packages.
type SigningKey Signer
// SignPackageHash signs the hash and the length of a package. Use PackageHash
// to compute the inputs.
func (s SigningKey) SignPackageHash(hash []byte, len int64) ([]byte, error) {
if len <= 0 {
return nil, fmt.Errorf("package length must be positive, got %d", len)
}
msg := binary.LittleEndian.AppendUint64(hash, uint64(len))
return s.Sign(nil, msg, crypto.Hash(0))
}
// PackageHash is a hash.Hash that counts the number of bytes written. Use it
// to get the hash and length inputs to SigningKey.SignPackageHash.
type PackageHash struct {
hash.Hash
len int64
}
// NewPackageHash returns an initialized PackageHash using BLAKE2s.
func NewPackageHash() *PackageHash {
h, err := blake2s.New256(nil)
if err != nil {
// Should never happen with a nil key passed to blake2s.
panic(err)
}
return &PackageHash{Hash: h}
}
func (ph *PackageHash) Write(b []byte) (int, error) {
ph.len += int64(len(b))
return ph.Hash.Write(b)
}
// Reset the PackageHash to its initial state.
func (ph *PackageHash) Reset() {
ph.len = 0
ph.Hash.Reset()
}
// Len returns the total number of bytes written.
func (ph *PackageHash) Len() int64 { return ph.len }
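To illustrate how PackageHash feeds SignPackageHash, here is a minimal signing sketch; the key and package paths are hypothetical, and error handling is deliberately crude:

package main

import (
	"io"
	"log"
	"os"

	"tailscale.com/clientupdate/distsign"
)

func main() {
	// Hypothetical paths for a signing key (in GenerateKey's PEM format)
	// and a package to sign.
	signer, err := distsign.NewSigner("signing-key.pem")
	if err != nil {
		log.Fatal(err)
	}
	key := distsign.SigningKey(signer)

	f, err := os.Open("tailscale.tgz")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// PackageHash records both the BLAKE2s digest and the byte count,
	// which are exactly the two inputs SignPackageHash needs.
	ph := distsign.NewPackageHash()
	if _, err := io.Copy(ph, f); err != nil {
		log.Fatal(err)
	}
	sig, err := key.SignPackageHash(ph.Sum(nil), ph.Len())
	if err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("tailscale.tgz.sig", sig, 0o644); err != nil {
		log.Fatal(err)
	}
}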
// Signer is a crypto.Signer using a single key (root or signing).
type Signer struct {
crypto.Signer
}
// NewSigner parses the PEM-encoded private key stored in the file named
// privKeyPath and creates a Signer for it. The key is expected to be in the
// same format as returned by GenerateKey.
func NewSigner(privKeyPath string) (Signer, error) {
raw, err := os.ReadFile(privKeyPath)
if err != nil {
return Signer{}, err
}
k, err := parsePrivateKey(raw)
if err != nil {
return Signer{}, fmt.Errorf("failed to parse %q: %w", privKeyPath, err)
}
return Signer{Signer: k}, nil
}
// Client downloads and validates files from a distribution server.
type Client struct {
roots []ed25519.PublicKey
pkgsAddr *url.URL
}
// NewClient returns a new client for the distribution server located at pkgsAddr,
// and uses the embedded root keys from the roots/ subdirectory of this package.
func NewClient(pkgsAddr string) (*Client, error) {
u, err := url.Parse(pkgsAddr)
if err != nil {
return nil, fmt.Errorf("invalid pkgsAddr %q: %w", pkgsAddr, err)
}
return &Client{roots: roots(), pkgsAddr: u}, nil
}
func (c *Client) url(path string) string {
return c.pkgsAddr.JoinPath(path).String()
}
// Download fetches the file at srcPath from the pkgsAddr passed to NewClient.
// The file is downloaded to dstPath and its signature is validated using the
// embedded root keys. Download returns an error if anything goes wrong with
// the actual file download or with signature validation.
func (c *Client) Download(srcPath, dstPath string) error {
// Always fetch a fresh signing key.
sigPub, err := c.signingKeys()
if err != nil {
return err
}
srcURL := c.url(srcPath)
sigURL := srcURL + ".sig"
dstPathUnverified := dstPath + ".unverified"
hash, len, err := download(srcURL, dstPathUnverified, downloadSizeLimit)
if err != nil {
return err
}
sig, err := fetch(sigURL, signatureSizeLimit)
if err != nil {
// Best-effort clean up of downloaded package.
os.Remove(dstPathUnverified)
return err
}
msg := binary.LittleEndian.AppendUint64(hash, uint64(len))
if !verifyAny(sigPub, msg, sig) {
// Best-effort clean up of downloaded package.
os.Remove(dstPathUnverified)
return fmt.Errorf("signature %q for file %q does not validate with the current release signing key; either you are under attack, or attempting to download an old version of Tailscale which was signed with an older signing key", sigURL, srcURL)
}
if err := os.Rename(dstPathUnverified, dstPath); err != nil {
return fmt.Errorf("failed to move %q to %q after signature validation: %w", dstPathUnverified, dstPath, err)
}
return nil
}
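On the client side, the whole flow reduces to NewClient plus Download; the server URL, package path, and destination below are hypothetical:

package main

import (
	"log"

	"tailscale.com/clientupdate/distsign"
)

func main() {
	c, err := distsign.NewClient("https://pkgs.example.com") // hypothetical server
	if err != nil {
		log.Fatal(err)
	}
	// Fetches stable/tailscale.tgz and stable/tailscale.tgz.sig, verifies
	// the signature against freshly fetched (and root-validated) signing
	// keys, and only then renames the .unverified temp file into place.
	if err := c.Download("stable/tailscale.tgz", "/tmp/tailscale.tgz"); err != nil {
		log.Fatal(err)
	}
}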
// signingKeys fetches current signing keys from the server and validates them
// against the roots. It should be called before validating any downloaded
// file so that fresh keys are used.
func (c *Client) signingKeys() ([]ed25519.PublicKey, error) {
keyURL := c.url("distsign.pub")
sigURL := keyURL + ".sig"
raw, err := fetch(keyURL, signingKeysSizeLimit)
if err != nil {
return nil, err
}
sig, err := fetch(sigURL, signatureSizeLimit)
if err != nil {
return nil, err
}
if !verifyAny(c.roots, raw, sig) {
return nil, fmt.Errorf("signature %q for key %q does not validate with any known root key; either you are under attack, or running a very old version of Tailscale with outdated root keys", sigURL, keyURL)
}
// Parse the bundle of public signing keys.
var keys []ed25519.PublicKey
for len(raw) > 0 {
pub, rest, err := parsePublicKey(raw)
if err != nil {
return nil, err
}
keys = append(keys, pub)
raw = rest
}
if len(keys) == 0 {
return nil, fmt.Errorf("no signing keys found at %q", keyURL)
}
return keys, nil
}
// fetch reads the response body from url into memory, up to limit bytes.
func fetch(url string, limit int64) ([]byte, error) {
resp, err := http.Get(url)
if err != nil {
return nil, err
}
defer resp.Body.Close()
return io.ReadAll(io.LimitReader(resp.Body, limit))
}
// download writes the response body of url into a local file at dst, up to
// limit bytes. On success, the returned values are the BLAKE2s hash and the
// byte length of the file.
func download(url, dst string, limit int64) ([]byte, int64, error) {
resp, err := http.Get(url)
if err != nil {
return nil, 0, err
}
defer resp.Body.Close()
h := NewPackageHash()
r := io.TeeReader(io.LimitReader(resp.Body, limit), h)
f, err := os.Create(dst)
if err != nil {
return nil, 0, err
}
defer f.Close()
if _, err := io.Copy(f, r); err != nil {
return nil, 0, err
}
if err := f.Close(); err != nil {
return nil, 0, err
}
return h.Sum(nil), h.Len(), nil
}
func parsePrivateKey(data []byte) (ed25519.PrivateKey, error) {
b, rest := pem.Decode(data)
if b == nil {
return nil, errors.New("failed to decode PEM data")
}
if len(rest) > 0 {
return nil, errors.New("trailing PEM data")
}
if b.Type != pemTypePrivate {
return nil, fmt.Errorf("PEM type is %q, want %q", b.Type, pemTypePrivate)
}
if len(b.Bytes) != ed25519.PrivateKeySize {
return nil, errors.New("private key has incorrect length for an Ed25519 private key")
}
return ed25519.PrivateKey(b.Bytes), nil
}
func parseSinglePublicKey(data []byte) (ed25519.PublicKey, error) {
pub, rest, err := parsePublicKey(data)
if err != nil {
return nil, err
}
if len(rest) > 0 {
return nil, errors.New("trailing PEM data")
}
return pub, nil
}
func parsePublicKey(data []byte) (pub ed25519.PublicKey, rest []byte, retErr error) {
b, rest := pem.Decode(data)
if b == nil {
return nil, nil, errors.New("failed to decode PEM data")
}
if b.Type != pemTypePublic {
return nil, nil, fmt.Errorf("PEM type is %q, want %q", b.Type, pemTypePublic)
}
if len(b.Bytes) != ed25519.PublicKeySize {
return nil, nil, errors.New("public key has incorrect length for an Ed25519 public key")
}
return ed25519.PublicKey(b.Bytes), rest, nil
}
// verifyAny verifies whether sig is valid for msg using any of the keys.
// verifyAny will panic if any of the keys are the wrong size for Ed25519.
func verifyAny(keys []ed25519.PublicKey, msg, sig []byte) bool {
for _, k := range keys {
if ed25519consensus.Verify(k, msg, sig) {
return true
}
}
return false
}

View File

@@ -0,0 +1,347 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
package distsign
import (
"bytes"
"crypto/ed25519"
"net/http"
"net/http/httptest"
"net/url"
"os"
"path/filepath"
"strings"
"testing"
"golang.org/x/crypto/blake2s"
)
func TestDownload(t *testing.T) {
srv := newTestServer(t)
c := srv.client(t)
tests := []struct {
desc string
before func(*testing.T)
src string
want []byte
wantErr bool
}{
{
desc: "missing file",
before: func(*testing.T) {},
src: "hello",
wantErr: true,
},
{
desc: "success",
before: func(*testing.T) {
srv.addSigned("hello", []byte("world"))
},
src: "hello",
want: []byte("world"),
},
{
desc: "no signature",
before: func(*testing.T) {
srv.add("hello", []byte("world"))
},
src: "hello",
wantErr: true,
},
{
desc: "bad signature",
before: func(*testing.T) {
srv.add("hello", []byte("world"))
srv.add("hello.sig", []byte("potato"))
},
src: "hello",
wantErr: true,
},
{
desc: "signed with untrusted key",
before: func(t *testing.T) {
srv.add("hello", []byte("world"))
srv.add("hello.sig", newSigningKeyPair(t).sign([]byte("world")))
},
src: "hello",
wantErr: true,
},
{
desc: "signed with root key",
before: func(t *testing.T) {
srv.add("hello", []byte("world"))
srv.add("hello.sig", srv.roots[0].sign([]byte("world")))
},
src: "hello",
wantErr: true,
},
{
desc: "bad signing key signature",
before: func(t *testing.T) {
srv.add("distsign.pub.sig", []byte("potato"))
srv.addSigned("hello", []byte("world"))
},
src: "hello",
wantErr: true,
},
}
for _, tt := range tests {
t.Run(tt.desc, func(t *testing.T) {
srv.reset()
tt.before(t)
dst := filepath.Join(t.TempDir(), tt.src)
t.Cleanup(func() {
os.Remove(dst)
})
err := c.Download(tt.src, dst)
if err != nil {
if tt.wantErr {
return
}
t.Fatalf("unexpected error from Download(%q): %v", tt.src, err)
}
if tt.wantErr {
t.Fatalf("Download(%q) succeeded, expected an error", tt.src)
}
got, err := os.ReadFile(dst)
if err != nil {
t.Fatal(err)
}
if !bytes.Equal(tt.want, got) {
t.Errorf("Download(%q): got %q, want %q", tt.src, got, tt.want)
}
})
}
}
func TestRotateRoot(t *testing.T) {
srv := newTestServer(t)
c1 := srv.client(t)
srv.addSigned("hello", []byte("world"))
if err := c1.Download("hello", filepath.Join(t.TempDir(), "hello")); err != nil {
t.Fatalf("Download failed on a fresh server: %v", err)
}
// Remove first root and replace it with a new key.
srv.roots = append(srv.roots[1:], newRootKeyPair(t))
// Old client can still download files because it still trusts the old
// root key.
if err := c1.Download("hello", filepath.Join(t.TempDir(), "hello")); err != nil {
t.Fatalf("Download failed after root rotation on old client: %v", err)
}
// The new client should fail to download because the current signing key is
// signed by a revoked root that the new client doesn't trust.
c2 := srv.client(t)
if err := c2.Download("hello", filepath.Join(t.TempDir(), "hello")); err == nil {
t.Fatalf("Download succeeded on new client, but signing key is signed with revoked root key")
}
// Re-sign signing key with another valid root that client still trusts.
srv.resignSigningKeys()
// Both old and new clients should now be able to download.
//
// Note: we don't need to re-sign the "hello" file because the signing key
// didn't change (only the signing key's signature did).
if err := c1.Download("hello", filepath.Join(t.TempDir(), "hello")); err != nil {
t.Fatalf("Download failed after root rotation on old client with re-signed signing key: %v", err)
}
if err := c2.Download("hello", filepath.Join(t.TempDir(), "hello")); err != nil {
t.Fatalf("Download failed after root rotation on new client with re-signed signing key: %v", err)
}
}
func TestRotateSigning(t *testing.T) {
srv := newTestServer(t)
c := srv.client(t)
srv.addSigned("hello", []byte("world"))
if err := c.Download("hello", filepath.Join(t.TempDir(), "hello")); err != nil {
t.Fatalf("Download failed on a fresh server: %v", err)
}
// Replace signing key but don't publish it yet.
srv.sign = append(srv.sign, newSigningKeyPair(t))
if err := c.Download("hello", filepath.Join(t.TempDir(), "hello")); err != nil {
t.Fatalf("Download failed after new signing key added but before publishing it: %v", err)
}
// Publish new signing key bundle with both keys.
srv.resignSigningKeys()
if err := c.Download("hello", filepath.Join(t.TempDir(), "hello")); err != nil {
t.Fatalf("Download failed after new signing key was published: %v", err)
}
// Re-sign the "hello" file with new signing key.
srv.add("hello.sig", srv.sign[1].sign([]byte("world")))
if err := c.Download("hello", filepath.Join(t.TempDir(), "hello")); err != nil {
t.Fatalf("Download failed after re-signing with new signing key: %v", err)
}
// Drop the old signing key.
srv.sign = srv.sign[1:]
srv.resignSigningKeys()
if err := c.Download("hello", filepath.Join(t.TempDir(), "hello")); err != nil {
t.Fatalf("Download failed after removing old signing key: %v", err)
}
// Add another key and re-sign the file with it *before* publishing.
srv.sign = append(srv.sign, newSigningKeyPair(t))
srv.add("hello.sig", srv.sign[1].sign([]byte("world")))
if err := c.Download("hello", filepath.Join(t.TempDir(), "hello")); err == nil {
t.Fatalf("Download succeeded when signed with a not-yet-published signing key")
}
// Fix this by publishing the new key.
srv.resignSigningKeys()
if err := c.Download("hello", filepath.Join(t.TempDir(), "hello")); err != nil {
t.Fatalf("Download failed after publishing new signing key: %v", err)
}
}
type testServer struct {
roots []rootKeyPair
sign []signingKeyPair
files map[string][]byte
srv *httptest.Server
}
func newTestServer(t *testing.T) *testServer {
var roots []rootKeyPair
for i := 0; i < 3; i++ {
roots = append(roots, newRootKeyPair(t))
}
ts := &testServer{
roots: roots,
sign: []signingKeyPair{newSigningKeyPair(t)},
}
ts.reset()
ts.srv = httptest.NewServer(ts)
t.Cleanup(ts.srv.Close)
return ts
}
func (s *testServer) client(t *testing.T) *Client {
roots := make([]ed25519.PublicKey, 0, len(s.roots))
for _, r := range s.roots {
pub, err := parseSinglePublicKey(r.pubRaw)
if err != nil {
t.Fatalf("parsePublicKey: %v", err)
}
roots = append(roots, pub)
}
u, err := url.Parse(s.srv.URL)
if err != nil {
t.Fatal(err)
}
return &Client{
roots: roots,
pkgsAddr: u,
}
}
func (s *testServer) ServeHTTP(w http.ResponseWriter, r *http.Request) {
path := strings.TrimPrefix(r.URL.Path, "/")
data, ok := s.files[path]
if !ok {
http.NotFound(w, r)
return
}
w.Write(data)
}
func (s *testServer) addSigned(name string, data []byte) {
s.files[name] = data
s.files[name+".sig"] = s.sign[0].sign(data)
}
func (s *testServer) add(name string, data []byte) {
s.files[name] = data
}
func (s *testServer) reset() {
s.files = make(map[string][]byte)
s.resignSigningKeys()
}
func (s *testServer) resignSigningKeys() {
var pubs [][]byte
for _, k := range s.sign {
pubs = append(pubs, k.pubRaw)
}
bundle := bytes.Join(pubs, []byte("\n"))
sig := s.roots[0].sign(bundle)
s.files["distsign.pub"] = bundle
s.files["distsign.pub.sig"] = sig
}
type rootKeyPair struct {
*RootKey
keyPair
}
func newRootKeyPair(t *testing.T) rootKeyPair {
kp := newKeyPair(t)
priv, err := parsePrivateKey(kp.privRaw)
if err != nil {
t.Fatalf("parsePrivateKey: %v", err)
}
return rootKeyPair{
RootKey: &RootKey{Signer: priv},
keyPair: kp,
}
}
func (s rootKeyPair) sign(bundle []byte) []byte {
sig, err := s.SignSigningKeys(bundle)
if err != nil {
panic(err)
}
return sig
}
type signingKeyPair struct {
*SigningKey
keyPair
}
func newSigningKeyPair(t *testing.T) signingKeyPair {
kp := newKeyPair(t)
priv, err := parsePrivateKey(kp.privRaw)
if err != nil {
t.Fatalf("parsePrivateKey: %v", err)
}
return signingKeyPair{
SigningKey: &SigningKey{Signer: priv},
keyPair: kp,
}
}
func (s signingKeyPair) sign(blob []byte) []byte {
hash := blake2s.Sum256(blob)
sig, err := s.SignPackageHash(hash[:], int64(len(blob)))
if err != nil {
panic(err)
}
return sig
}
type keyPair struct {
privRaw []byte
pubRaw []byte
}
func newKeyPair(t *testing.T) keyPair {
privRaw, pubRaw, err := GenerateKey()
if err != nil {
t.Fatalf("GenerateKey: %v", err)
}
return keyPair{
privRaw: privRaw,
pubRaw: pubRaw,
}
}

View File

@@ -0,0 +1,54 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
package distsign
import (
"crypto/ed25519"
"embed"
"errors"
"fmt"
"path"
"path/filepath"
"sync"
)
//go:embed roots
var rootsFS embed.FS
var roots = sync.OnceValue(func() []ed25519.PublicKey {
roots, err := parseRoots()
if err != nil {
panic(err)
}
return roots
})
func parseRoots() ([]ed25519.PublicKey, error) {
files, err := rootsFS.ReadDir("roots")
if err != nil {
return nil, err
}
var keys []ed25519.PublicKey
for _, f := range files {
if !f.Type().IsRegular() {
continue
}
if filepath.Ext(f.Name()) != ".pub" {
continue
}
raw, err := rootsFS.ReadFile(path.Join("roots", f.Name()))
if err != nil {
return nil, err
}
key, err := parseSinglePublicKey(raw)
if err != nil {
return nil, fmt.Errorf("parsing root key %q: %w", f.Name(), err)
}
keys = append(keys, key)
}
if len(keys) == 0 {
return nil, errors.New("no embedded root keys, please check clientupdate/distsign/roots/")
}
return keys, nil
}

View File

@@ -0,0 +1,3 @@
-----BEGIN PUBLIC KEY-----
JNBgo4EFQ+DpRcESM2xU19xQWGffvLcmxtBMT4I+Qo0=
-----END PUBLIC KEY-----

View File

@@ -0,0 +1,16 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
package distsign
import "testing"
func TestParseRoots(t *testing.T) {
roots, err := parseRoots()
if err != nil {
t.Fatal(err)
}
if len(roots) == 0 {
t.Error("parseRoots returned no root keys")
}
}

View File

@@ -126,8 +126,8 @@ func gen(buf *bytes.Buffer, it *codegen.ImportTracker, typ *types.Named) {
writef("for i := range dst.%s {", fname)
if ptr, isPtr := ft.Elem().(*types.Pointer); isPtr {
if _, isBasic := ptr.Elem().Underlying().(*types.Basic); isBasic {
writef("\tx := *src.%s[i]", fname)
writef("\tdst.%s[i] = &x", fname)
it.Import("tailscale.com/types/ptr")
writef("\tdst.%s[i] = ptr.To(*src.%s[i])", fname, fname)
} else {
writef("\tdst.%s[i] = src.%s[i].Clone()", fname, fname)
}
@@ -145,41 +145,41 @@ func gen(buf *bytes.Buffer, it *codegen.ImportTracker, typ *types.Named) {
writef("dst.%s = src.%s.Clone()", fname, fname)
continue
}
n := it.QualifiedName(ft.Elem())
it.Import("tailscale.com/types/ptr")
writef("if dst.%s != nil {", fname)
writef("\tdst.%s = new(%s)", fname, n)
writef("\t*dst.%s = *src.%s", fname, fname)
writef("\tdst.%s = ptr.To(*src.%s)", fname, fname)
if codegen.ContainsPointers(ft.Elem()) {
writef("\t" + `panic("TODO pointers in pointers")`)
}
writef("}")
case *types.Map:
elem := ft.Elem()
writef("if dst.%s != nil {", fname)
writef("\tdst.%s = map[%s]%s{}", fname, it.QualifiedName(ft.Key()), it.QualifiedName(elem))
if sliceType, isSlice := elem.(*types.Slice); isSlice {
n := it.QualifiedName(sliceType.Elem())
writef("if dst.%s != nil {", fname)
writef("\tdst.%s = map[%s]%s{}", fname, it.QualifiedName(ft.Key()), it.QualifiedName(elem))
writef("\tfor k := range src.%s {", fname)
// use zero-length slice instead of nil to ensure
// the key is always copied.
writef("\t\tdst.%s[k] = append([]%s{}, src.%s[k]...)", fname, n, fname)
writef("\t}")
writef("}")
} else if codegen.ContainsPointers(elem) {
writef("if dst.%s != nil {", fname)
writef("\tdst.%s = map[%s]%s{}", fname, it.QualifiedName(ft.Key()), it.QualifiedName(elem))
writef("\tfor k, v := range src.%s {", fname)
switch elem.(type) {
case *types.Pointer:
writef("\t\tdst.%s[k] = v.Clone()", fname)
default:
writef("\t\tv2 := v.Clone()")
writef("\t\tdst.%s[k] = *v2", fname)
writef("\t\tdst.%s[k] = *(v.Clone())", fname)
}
writef("\t}")
writef("}")
} else {
writef("\tfor k, v := range src.%s {", fname)
writef("\t\tdst.%s[k] = v", fname)
writef("\t}")
it.Import("maps")
writef("\tdst.%s = maps.Clone(src.%s)", fname, fname)
}
writef("}")
default:
writef(`panic("TODO: %s (%T)")`, fname, ft)
}
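To see what the rewritten map branch produces, here is roughly the Clone method the generator would now emit for a map-of-slices field; the type T is invented for illustration:

type T struct {
	M map[string][]string
}

// Clone, approximately as generated for T by the map-of-slices branch above.
func (src *T) Clone() *T {
	if src == nil {
		return nil
	}
	dst := new(T)
	*dst = *src
	if dst.M != nil {
		dst.M = map[string][]string{}
		for k := range src.M {
			// A zero-length slice instead of nil ensures the key is
			// copied even when the source slice is nil.
			dst.M[k] = append([]string{}, src.M[k]...)
		}
	}
	return dst
}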

View File

@@ -266,9 +266,9 @@ authLoop:
log.Fatalf("installing proxy rules: %v", err)
}
}
deviceInfo := []any{n.NetMap.SelfNode.StableID, n.NetMap.SelfNode.Name}
deviceInfo := []any{n.NetMap.SelfNode.StableID(), n.NetMap.SelfNode.Name()}
if cfg.InKubernetes && cfg.KubernetesCanPatch && cfg.KubeSecret != "" && deephash.Update(&currentDeviceInfo, &deviceInfo) {
if err := storeDeviceInfo(ctx, cfg.KubeSecret, n.NetMap.SelfNode.StableID, n.NetMap.SelfNode.Name); err != nil {
if err := storeDeviceInfo(ctx, cfg.KubeSecret, n.NetMap.SelfNode.StableID(), n.NetMap.SelfNode.Name()); err != nil {
log.Fatalf("storing device ID in kube secret: %v", err)
}
}

View File

@@ -112,10 +112,10 @@ func TestContainerBoot(t *testing.T) {
runningNotify := &ipn.Notify{
State: ptr.To(ipn.Running),
NetMap: &netmap.NetworkMap{
SelfNode: &tailcfg.Node{
SelfNode: (&tailcfg.Node{
StableID: tailcfg.StableNodeID("myID"),
Name: "test-node.test.ts.net",
},
}).View(),
Addresses: []netip.Prefix{netip.MustParsePrefix("100.64.0.1/32")},
},
}
@@ -482,10 +482,10 @@ func TestContainerBoot(t *testing.T) {
Notify: &ipn.Notify{
State: ptr.To(ipn.Running),
NetMap: &netmap.NetworkMap{
SelfNode: &tailcfg.Node{
SelfNode: (&tailcfg.Node{
StableID: tailcfg.StableNodeID("newID"),
Name: "new-name.test.ts.net",
},
}).View(),
Addresses: []netip.Prefix{netip.MustParsePrefix("100.64.0.1/32")},
},
},

View File

@@ -25,6 +25,7 @@ var (
dnsCache syncs.AtomicValue[dnsEntryMap]
dnsCacheBytes syncs.AtomicValue[[]byte] // of JSON
unpublishedDNSCache syncs.AtomicValue[dnsEntryMap]
bootstrapLookupMap syncs.Map[string, bool]
)
var (
@@ -35,6 +36,12 @@ var (
unpublishedDNSMisses = expvar.NewInt("counter_bootstrap_dns_unpublished_misses")
)
func init() {
expvar.Publish("counter_bootstrap_dns_queried_domains", expvar.Func(func() any {
return bootstrapLookupMap.Len()
}))
}
func refreshBootstrapDNSLoop() {
if *bootstrapDNS == "" && *unpublishedDNS == "" {
return
@@ -107,6 +114,7 @@ func handleBootstrapDNS(w http.ResponseWriter, r *http.Request) {
// Try answering a query from our hidden map first
if q := r.URL.Query().Get("q"); q != "" {
bootstrapLookupMap.Store(q, true)
if ips, ok := unpublishedDNSCache.Load()[q]; ok && len(ips) > 0 {
unpublishedDNSHits.Add(1)

View File

@@ -98,6 +98,7 @@ func resetMetrics() {
publishedDNSMisses.Set(0)
unpublishedDNSHits.Set(0)
unpublishedDNSMisses.Set(0)
bootstrapLookupMap.Clear()
}
// Verify that we don't count an empty list in the unpublishedDNSCache as a
@@ -148,4 +149,17 @@ func TestUnpublishedDNSEmptyList(t *testing.T) {
t.Errorf("got misses=%d; want 0", v)
}
})
}
func TestLookupMetric(t *testing.T) {
d := []string{"a.io", "b.io", "c.io", "d.io", "e.io", "e.io", "e.io", "a.io"}
resetMetrics()
for _, q := range d {
_ = getBootstrapDNS(t, q)
}
// {"a.io": true, "b.io": true, "c.io": true, "d.io": true, "e.io": true}
if bootstrapLookupMap.Len() != 5 {
t.Errorf("bootstrapLookupMap.Len() want=5, got %v", bootstrapLookupMap.Len())
}
}

View File

@@ -13,6 +13,7 @@ tailscale.com/cmd/derper dependencies: (generated by github.com/tailscale/depawa
github.com/beorn7/perks/quantile from github.com/prometheus/client_golang/prometheus
💣 github.com/cespare/xxhash/v2 from github.com/prometheus/client_golang/prometheus
L github.com/coreos/go-iptables/iptables from tailscale.com/util/linuxfw
W 💣 github.com/dblohm7/wingoes from tailscale.com/util/winutil
github.com/fxamacker/cbor/v2 from tailscale.com/tka
github.com/golang/groupcache/lru from tailscale.com/net/dnscache
github.com/golang/protobuf/proto from github.com/matttproud/golang_protobuf_extensions/pbutil+
@@ -168,9 +169,6 @@ tailscale.com/cmd/derper dependencies: (generated by github.com/tailscale/depawa
golang.org/x/crypto/nacl/box from tailscale.com/types/key
golang.org/x/crypto/nacl/secretbox from golang.org/x/crypto/nacl/box
golang.org/x/crypto/salsa20/salsa from golang.org/x/crypto/nacl/box+
golang.org/x/exp/constraints from golang.org/x/exp/slices
golang.org/x/exp/maps from tailscale.com/types/views
golang.org/x/exp/slices from tailscale.com/net/tsaddr+
L golang.org/x/net/bpf from github.com/mdlayher/netlink+
golang.org/x/net/dns/dnsmessage from net+
golang.org/x/net/http/httpguts from net/http
@@ -193,6 +191,7 @@ tailscale.com/cmd/derper dependencies: (generated by github.com/tailscale/depawa
golang.org/x/time/rate from tailscale.com/cmd/derper+
bufio from compress/flate+
bytes from bufio+
cmp from slices
compress/flate from compress/gzip+
compress/gzip from internal/profile+
container/list from crypto/tls+
@@ -242,6 +241,7 @@ tailscale.com/cmd/derper dependencies: (generated by github.com/tailscale/depawa
io/ioutil from github.com/mitchellh/go-ps+
log from expvar+
log/internal from log
maps from tailscale.com/types/views+
math from compress/flate+
math/big from crypto/dsa+
math/bits from compress/flate+
@@ -269,6 +269,7 @@ tailscale.com/cmd/derper dependencies: (generated by github.com/tailscale/depawa
runtime/metrics from github.com/prometheus/client_golang/prometheus+
runtime/pprof from net/http/pprof
runtime/trace from net/http/pprof
slices from tailscale.com/ipn+
sort from compress/flate+
strconv from compress/flate+
strings from bufio+

View File

@@ -7,10 +7,6 @@ package main
import (
"context"
"crypto/tls"
_ "embed"
"fmt"
"net/http"
"os"
"strings"
"time"
@@ -18,15 +14,11 @@ import (
"github.com/go-logr/zapr"
"go.uber.org/zap"
"go.uber.org/zap/zapcore"
"golang.org/x/exp/slices"
"golang.org/x/oauth2/clientcredentials"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/transport"
"k8s.io/client-go/rest"
"sigs.k8s.io/controller-runtime/pkg/builder"
"sigs.k8s.io/controller-runtime/pkg/cache"
"sigs.k8s.io/controller-runtime/pkg/client"
@@ -37,15 +29,12 @@ import (
"sigs.k8s.io/controller-runtime/pkg/manager"
"sigs.k8s.io/controller-runtime/pkg/manager/signals"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
"sigs.k8s.io/yaml"
"tailscale.com/client/tailscale"
"tailscale.com/hostinfo"
"tailscale.com/ipn"
"tailscale.com/ipn/store/kubestore"
"tailscale.com/tsnet"
"tailscale.com/types/logger"
"tailscale.com/types/opt"
"tailscale.com/util/dnsname"
"tailscale.com/version"
)
@@ -55,13 +44,8 @@ func main() {
tailscale.I_Acknowledge_This_API_Is_Unstable = true
var (
hostname = defaultEnv("OPERATOR_HOSTNAME", "tailscale-operator")
kubeSecret = defaultEnv("OPERATOR_SECRET", "")
operatorTags = defaultEnv("OPERATOR_INITIAL_TAGS", "tag:k8s-operator")
tsNamespace = defaultEnv("OPERATOR_NAMESPACE", "")
tslogging = defaultEnv("OPERATOR_LOGGING", "info")
clientIDPath = defaultEnv("CLIENT_ID_FILE", "")
clientSecretPath = defaultEnv("CLIENT_SECRET_FILE", "")
image = defaultEnv("PROXY_IMAGE", "tailscale/tailscale:latest")
priorityClassName = defaultEnv("PROXY_PRIORITY_CLASS_NAME", "")
tags = defaultEnv("PROXY_TAGS", "tag:k8s")
@@ -79,8 +63,29 @@ func main() {
}
zlog := kzap.NewRaw(opts...).Sugar()
logf.SetLogger(zapr.NewLogger(zlog.Desugar()))
startlog := zlog.Named("startup")
s, tsClient := initTSNet(zlog)
defer s.Close()
restConfig := config.GetConfigOrDie()
if shouldRunAuthProxy {
launchAuthProxy(zlog, restConfig, s)
}
startReconcilers(zlog, tsNamespace, restConfig, tsClient, image, priorityClassName, tags)
}
// initTSNet initializes the tsnet.Server and logs in to Tailscale. It uses the
// CLIENT_ID_FILE and CLIENT_SECRET_FILE environment variables to authenticate
// with Tailscale.
func initTSNet(zlog *zap.SugaredLogger) (*tsnet.Server, *tailscale.Client) {
hostinfo.SetApp("k8s-operator")
var (
clientIDPath = defaultEnv("CLIENT_ID_FILE", "")
clientSecretPath = defaultEnv("CLIENT_SECRET_FILE", "")
hostname = defaultEnv("OPERATOR_HOSTNAME", "tailscale-operator")
kubeSecret = defaultEnv("OPERATOR_SECRET", "")
operatorTags = defaultEnv("OPERATOR_INITIAL_TAGS", "tag:k8s-operator")
)
startlog := zlog.Named("startup")
if clientIDPath == "" || clientSecretPath == "" {
startlog.Fatalf("CLIENT_ID_FILE and CLIENT_SECRET_FILE must be set")
}
@@ -100,12 +105,6 @@ func main() {
tsClient := tailscale.NewClient("-", nil)
tsClient.HTTPClient = credentials.Client(context.Background())
if shouldRunAuthProxy {
hostinfo.SetApp("k8s-operator-proxy")
} else {
hostinfo.SetApp("k8s-operator")
}
s := &tsnet.Server{
Hostname: hostname,
Logf: zlog.Named("tailscaled").Debugf,
@@ -120,7 +119,6 @@ func main() {
if err := s.Start(); err != nil {
startlog.Fatalf("starting tailscale server: %v", err)
}
defer s.Close()
lc, err := s.LocalClient()
if err != nil {
startlog.Fatalf("getting local client: %v", err)
@@ -176,7 +174,13 @@ waitOnline:
}
time.Sleep(time.Second)
}
return s, tsClient
}
// startReconcilers starts the controller-runtime manager and registers the
// ServiceReconciler.
func startReconcilers(zlog *zap.SugaredLogger, tsNamespace string, restConfig *rest.Config, tsClient *tailscale.Client, image, priorityClassName, tags string) {
startlog := zlog.Named("startReconcilers")
// For secrets and statefulsets, we only get permission to touch the objects
// in the controller's own namespace. This cannot be expressed by
// .Watches(...) below; instead, you have to add a per-type field selector to
@@ -186,7 +190,6 @@ waitOnline:
nsFilter := cache.ByObject{
Field: client.InNamespace(tsNamespace).AsSelector(),
}
restConfig := config.GetConfigOrDie()
mgr, err := manager.New(restConfig, manager.Options{
Cache: cache.Options{
ByObject: map[client.Object]cache.ByObject{
@@ -199,16 +202,6 @@ waitOnline:
startlog.Fatalf("could not create manager: %v", err)
}
sr := &ServiceReconciler{
Client: mgr.GetClient(),
tsClient: tsClient,
defaultTags: strings.Split(tags, ","),
operatorNamespace: tsNamespace,
proxyImage: image,
proxyPriorityClassName: priorityClassName,
logger: zlog.Named("service-reconciler"),
}
reconcileFilter := handler.EnqueueRequestsFromMapFunc(func(_ context.Context, o client.Object) []reconcile.Request {
ls := o.GetLabels()
if ls[LabelManaged] != "true" {
@@ -231,522 +224,29 @@ waitOnline:
For(&corev1.Service{}).
Watches(&appsv1.StatefulSet{}, reconcileFilter).
Watches(&corev1.Secret{}, reconcileFilter).
Complete(sr)
Complete(&ServiceReconciler{
ssr: &tailscaleSTSReconciler{
Client: mgr.GetClient(),
tsClient: tsClient,
defaultTags: strings.Split(tags, ","),
operatorNamespace: tsNamespace,
proxyImage: image,
proxyPriorityClassName: priorityClassName,
},
Client: mgr.GetClient(),
logger: zlog.Named("service-reconciler"),
})
if err != nil {
startlog.Fatalf("could not create controller: %v", err)
}
startlog.Infof("Startup complete, operator running, version: %s", version.Long())
if shouldRunAuthProxy {
cfg, err := restConfig.TransportConfig()
if err != nil {
startlog.Fatalf("could not get rest.TransportConfig(): %v", err)
}
// Kubernetes uses SPDY for exec and port-forward, however SPDY is
// incompatible with HTTP/2; so disable HTTP/2 in the proxy.
tr := http.DefaultTransport.(*http.Transport).Clone()
tr.TLSClientConfig, err = transport.TLSConfigFor(cfg)
if err != nil {
startlog.Fatalf("could not get transport.TLSConfigFor(): %v", err)
}
tr.TLSNextProto = make(map[string]func(authority string, c *tls.Conn) http.RoundTripper)
rt, err := transport.HTTPWrappersForConfig(cfg, tr)
if err != nil {
startlog.Fatalf("could not get rest.TransportConfig(): %v", err)
}
go runAuthProxy(s, rt, zlog.Named("auth-proxy").Infof)
}
if err := mgr.Start(signals.SetupSignalHandler()); err != nil {
startlog.Fatalf("could not start manager: %v", err)
}
}
const (
LabelManaged = "tailscale.com/managed"
LabelParentType = "tailscale.com/parent-resource-type"
LabelParentName = "tailscale.com/parent-resource"
LabelParentNamespace = "tailscale.com/parent-resource-ns"
FinalizerName = "tailscale.com/finalizer"
AnnotationExpose = "tailscale.com/expose"
AnnotationTags = "tailscale.com/tags"
AnnotationHostname = "tailscale.com/hostname"
)
// ServiceReconciler is a simple ControllerManagedBy example implementation.
type ServiceReconciler struct {
client.Client
tsClient tsClient
defaultTags []string
operatorNamespace string
proxyImage string
proxyPriorityClassName string
logger *zap.SugaredLogger
}
type tsClient interface {
CreateKey(ctx context.Context, caps tailscale.KeyCapabilities) (string, *tailscale.Key, error)
DeleteDevice(ctx context.Context, id string) error
}
func childResourceLabels(parent *corev1.Service) map[string]string {
// You might wonder why we're using owner references, since they seem to be
// built for exactly this. Unfortunately, Kubernetes does not support
// cross-namespace ownership, by design. This means we cannot make the
// service being exposed the owner of the implementation details of the
// proxying. Instead, we have to do our own filtering and tracking with
// labels.
return map[string]string{
LabelManaged: "true",
LabelParentName: parent.GetName(),
LabelParentNamespace: parent.GetNamespace(),
LabelParentType: "svc",
}
}
func (a *ServiceReconciler) Reconcile(ctx context.Context, req reconcile.Request) (_ reconcile.Result, err error) {
logger := a.logger.With("service-ns", req.Namespace, "service-name", req.Name)
logger.Debugf("starting reconcile")
defer logger.Debugf("reconcile finished")
svc := new(corev1.Service)
err = a.Get(ctx, req.NamespacedName, svc)
if apierrors.IsNotFound(err) {
// Request object not found, could have been deleted after reconcile request.
logger.Debugf("service not found, assuming it was deleted")
return reconcile.Result{}, nil
} else if err != nil {
return reconcile.Result{}, fmt.Errorf("failed to get svc: %w", err)
}
if !svc.DeletionTimestamp.IsZero() || !a.shouldExpose(svc) {
logger.Debugf("service is being deleted or should not be exposed, cleaning up")
return reconcile.Result{}, a.maybeCleanup(ctx, logger, svc)
}
return reconcile.Result{}, a.maybeProvision(ctx, logger, svc)
}
// maybeCleanup removes any existing resources related to serving svc over tailscale.
//
// This function is responsible for removing the finalizer from the service,
// once all associated resources are gone.
func (a *ServiceReconciler) maybeCleanup(ctx context.Context, logger *zap.SugaredLogger, svc *corev1.Service) error {
ix := slices.Index(svc.Finalizers, FinalizerName)
if ix < 0 {
logger.Debugf("no finalizer, nothing to do")
return nil
}
ml := childResourceLabels(svc)
// Need to delete the StatefulSet first, and delete it with foreground
// cascading deletion. That way, the pod that's writing to the Secret will
// stop running before we start looking at the Secret's contents, and
// assuming k8s ordering semantics don't mess with us, that should avoid
// tailscale device deletion races where we fail to notice a device that
// should be removed.
sts, err := getSingleObject[appsv1.StatefulSet](ctx, a.Client, a.operatorNamespace, ml)
if err != nil {
return fmt.Errorf("getting statefulset: %w", err)
}
if sts != nil {
if !sts.GetDeletionTimestamp().IsZero() {
// Deletion in progress, check again later. We'll get another
// notification when the deletion is complete.
logger.Debugf("waiting for statefulset %s/%s deletion", sts.GetNamespace(), sts.GetName())
return nil
}
err := a.DeleteAllOf(ctx, &appsv1.StatefulSet{}, client.InNamespace(a.operatorNamespace), client.MatchingLabels(ml), client.PropagationPolicy(metav1.DeletePropagationForeground))
if err != nil {
return fmt.Errorf("deleting statefulset: %w", err)
}
logger.Debugf("started deletion of statefulset %s/%s", sts.GetNamespace(), sts.GetName())
return nil
}
id, _, err := a.getDeviceInfo(ctx, svc)
if err != nil {
return fmt.Errorf("getting device info: %w", err)
}
if id != "" {
// TODO: handle case where the device is already deleted, but the secret
// is still around.
if err := a.tsClient.DeleteDevice(ctx, id); err != nil {
return fmt.Errorf("deleting device: %w", err)
}
}
types := []client.Object{
&corev1.Service{},
&corev1.Secret{},
}
for _, typ := range types {
if err := a.DeleteAllOf(ctx, typ, client.InNamespace(a.operatorNamespace), client.MatchingLabels(ml)); err != nil {
return err
}
}
svc.Finalizers = append(svc.Finalizers[:ix], svc.Finalizers[ix+1:]...)
if err := a.Update(ctx, svc); err != nil {
return fmt.Errorf("failed to remove finalizer: %w", err)
}
// Unlike most log entries in the reconcile loop, this will get printed
// exactly once at the very end of cleanup, because the final step of
// cleanup removes the tailscale finalizer, which will make all future
// reconciles exit early.
logger.Infof("unexposed service from tailnet")
return nil
}
// maybeProvision ensures that svc is exposed over tailscale, taking any actions
// necessary to reach that state.
//
// This function adds a finalizer to svc, ensuring that we can handle orderly
// deprovisioning later.
func (a *ServiceReconciler) maybeProvision(ctx context.Context, logger *zap.SugaredLogger, svc *corev1.Service) error {
hostname, err := nameForService(svc)
if err != nil {
return err
}
if !slices.Contains(svc.Finalizers, FinalizerName) {
// This log line is printed exactly once during initial provisioning,
// because once the finalizer is in place this block gets skipped. So,
// this is a nice place to tell the operator that the high level,
// multi-reconcile operation is underway.
logger.Infof("exposing service over tailscale")
svc.Finalizers = append(svc.Finalizers, FinalizerName)
if err := a.Update(ctx, svc); err != nil {
return fmt.Errorf("failed to add finalizer: %w", err)
}
}
// Do full reconcile.
hsvc, err := a.reconcileHeadlessService(ctx, logger, svc)
if err != nil {
return fmt.Errorf("failed to reconcile headless service: %w", err)
}
tags := a.defaultTags
if tstr, ok := svc.Annotations[AnnotationTags]; ok {
tags = strings.Split(tstr, ",")
}
secretName, err := a.createOrGetSecret(ctx, logger, svc, hsvc, tags)
if err != nil {
return fmt.Errorf("failed to create or get API key secret: %w", err)
}
_, err = a.reconcileSTS(ctx, logger, svc, hsvc, secretName, hostname)
if err != nil {
return fmt.Errorf("failed to reconcile statefulset: %w", err)
}
if !a.hasLoadBalancerClass(svc) {
logger.Debugf("service is not a LoadBalancer, so not updating ingress")
return nil
}
_, tsHost, err := a.getDeviceInfo(ctx, svc)
if err != nil {
return fmt.Errorf("failed to get device ID: %w", err)
}
if tsHost == "" {
logger.Debugf("no Tailscale hostname known yet, waiting for proxy pod to finish auth")
// No hostname yet. Wait for the proxy pod to auth.
svc.Status.LoadBalancer.Ingress = nil
if err := a.Status().Update(ctx, svc); err != nil {
return fmt.Errorf("failed to update service status: %w", err)
}
return nil
}
logger.Debugf("setting ingress hostname to %q", tsHost)
svc.Status.LoadBalancer.Ingress = []corev1.LoadBalancerIngress{
{
Hostname: tsHost,
},
}
if err := a.Status().Update(ctx, svc); err != nil {
return fmt.Errorf("failed to update service status: %w", err)
}
return nil
}
func (a *ServiceReconciler) shouldExpose(svc *corev1.Service) bool {
// Headless services can't be exposed, since there is no ClusterIP to
// forward to.
if svc.Spec.ClusterIP == "" || svc.Spec.ClusterIP == "None" {
return false
}
return a.hasLoadBalancerClass(svc) || a.hasAnnotation(svc)
}
func (a *ServiceReconciler) hasLoadBalancerClass(svc *corev1.Service) bool {
return svc != nil &&
svc.Spec.Type == corev1.ServiceTypeLoadBalancer &&
svc.Spec.LoadBalancerClass != nil &&
*svc.Spec.LoadBalancerClass == "tailscale"
}
func (a *ServiceReconciler) hasAnnotation(svc *corev1.Service) bool {
return svc != nil &&
svc.Annotations[AnnotationExpose] == "true"
}
func (a *ServiceReconciler) reconcileHeadlessService(ctx context.Context, logger *zap.SugaredLogger, svc *corev1.Service) (*corev1.Service, error) {
hsvc := &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
GenerateName: "ts-" + svc.Name + "-",
Namespace: a.operatorNamespace,
Labels: childResourceLabels(svc),
},
Spec: corev1.ServiceSpec{
ClusterIP: "None",
Selector: map[string]string{
"app": string(svc.UID),
},
},
}
logger.Debugf("reconciling headless service for StatefulSet")
return createOrUpdate(ctx, a.Client, a.operatorNamespace, hsvc, func(svc *corev1.Service) { svc.Spec = hsvc.Spec })
}
func (a *ServiceReconciler) createOrGetSecret(ctx context.Context, logger *zap.SugaredLogger, svc, hsvc *corev1.Service, tags []string) (string, error) {
secret := &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
// Hardcode a -0 suffix so that in future, if we support
// multiple StatefulSet replicas, we can provision -N for
// those.
Name: hsvc.Name + "-0",
Namespace: a.operatorNamespace,
Labels: childResourceLabels(svc),
},
}
if err := a.Get(ctx, client.ObjectKeyFromObject(secret), secret); err == nil {
logger.Debugf("secret %s/%s already exists", secret.GetNamespace(), secret.GetName())
return secret.Name, nil
} else if !apierrors.IsNotFound(err) {
return "", err
}
// Secret doesn't exist yet, create one. Initially it contains
// only the Tailscale authkey, but once Tailscale starts it'll
// also store the daemon state.
sts, err := getSingleObject[appsv1.StatefulSet](ctx, a.Client, a.operatorNamespace, childResourceLabels(svc))
if err != nil {
return "", err
}
if sts != nil {
// StatefulSet exists, so we have already created the secret.
// If the secret is missing, they should delete the StatefulSet.
logger.Errorf("Tailscale proxy secret doesn't exist, but the corresponding StatefulSet %s/%s already does. Something is wrong, please delete the StatefulSet.", sts.GetNamespace(), sts.GetName())
return "", nil
}
// Create API Key secret which is going to be used by the statefulset
// to authenticate with Tailscale.
logger.Debugf("creating authkey for new tailscale proxy")
authKey, err := a.newAuthKey(ctx, tags)
if err != nil {
return "", err
}
secret.StringData = map[string]string{
"authkey": authKey,
}
if err := a.Create(ctx, secret); err != nil {
return "", err
}
return secret.Name, nil
}
func (a *ServiceReconciler) getDeviceInfo(ctx context.Context, svc *corev1.Service) (id, hostname string, err error) {
sec, err := getSingleObject[corev1.Secret](ctx, a.Client, a.operatorNamespace, childResourceLabels(svc))
if err != nil {
return "", "", err
}
if sec == nil {
return "", "", nil
}
id = string(sec.Data["device_id"])
if id == "" {
return "", "", nil
}
// Kubernetes chokes on well-formed FQDNs with the trailing dot, so we have
// to remove it.
hostname = strings.TrimSuffix(string(sec.Data["device_fqdn"]), ".")
if hostname == "" {
return "", "", nil
}
return id, hostname, nil
}
func (a *ServiceReconciler) newAuthKey(ctx context.Context, tags []string) (string, error) {
caps := tailscale.KeyCapabilities{
Devices: tailscale.KeyDeviceCapabilities{
Create: tailscale.KeyDeviceCreateCapabilities{
Reusable: false,
Preauthorized: true,
Tags: tags,
},
},
}
key, _, err := a.tsClient.CreateKey(ctx, caps)
if err != nil {
return "", err
}
return key, nil
}
//go:embed manifests/proxy.yaml
var proxyYaml []byte
func (a *ServiceReconciler) reconcileSTS(ctx context.Context, logger *zap.SugaredLogger, parentSvc, headlessSvc *corev1.Service, authKeySecret, hostname string) (*appsv1.StatefulSet, error) {
var ss appsv1.StatefulSet
if err := yaml.Unmarshal(proxyYaml, &ss); err != nil {
return nil, fmt.Errorf("failed to unmarshal proxy spec: %w", err)
}
container := &ss.Spec.Template.Spec.Containers[0]
container.Image = a.proxyImage
container.Env = append(container.Env,
corev1.EnvVar{
Name: "TS_DEST_IP",
Value: parentSvc.Spec.ClusterIP,
},
corev1.EnvVar{
Name: "TS_KUBE_SECRET",
Value: authKeySecret,
},
corev1.EnvVar{
Name: "TS_HOSTNAME",
Value: hostname,
})
ss.ObjectMeta = metav1.ObjectMeta{
Name: headlessSvc.Name,
Namespace: a.operatorNamespace,
Labels: childResourceLabels(parentSvc),
}
ss.Spec.ServiceName = headlessSvc.Name
ss.Spec.Selector = &metav1.LabelSelector{
MatchLabels: map[string]string{
"app": string(parentSvc.UID),
},
}
ss.Spec.Template.ObjectMeta.Labels = map[string]string{
"app": string(parentSvc.UID),
}
ss.Spec.Template.Spec.PriorityClassName = a.proxyPriorityClassName
logger.Debugf("reconciling statefulset %s/%s", ss.GetNamespace(), ss.GetName())
return createOrUpdate(ctx, a.Client, a.operatorNamespace, &ss, func(s *appsv1.StatefulSet) { s.Spec = ss.Spec })
}
// ptrObject is a type constraint for pointer types that implement
// client.Object.
type ptrObject[T any] interface {
client.Object
*T
}
// createOrUpdate adds obj to the k8s cluster, unless the object already exists,
// in which case update is called to make changes to it. If update is nil, the
// existing object is returned unmodified.
//
// obj is looked up by its Name and Namespace if Name is set; otherwise it's
// looked up by labels.
func createOrUpdate[T any, O ptrObject[T]](ctx context.Context, c client.Client, ns string, obj O, update func(O)) (O, error) {
var (
existing O
err error
)
if obj.GetName() != "" {
existing = new(T)
existing.SetName(obj.GetName())
existing.SetNamespace(obj.GetNamespace())
err = c.Get(ctx, client.ObjectKeyFromObject(obj), existing)
} else {
existing, err = getSingleObject[T, O](ctx, c, ns, obj.GetLabels())
}
if err == nil && existing != nil {
if update != nil {
update(existing)
if err := c.Update(ctx, existing); err != nil {
return nil, err
}
}
return existing, nil
}
if err != nil && !apierrors.IsNotFound(err) {
return nil, fmt.Errorf("failed to get object: %w", err)
}
if err := c.Create(ctx, obj); err != nil {
return nil, err
}
return obj, nil
}
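A minimal usage sketch, assuming a context ctx and a controller-runtime client c are in scope; the ConfigMap name and data are hypothetical:

cm := &corev1.ConfigMap{
	ObjectMeta: metav1.ObjectMeta{Name: "ts-example", Namespace: "operator-ns"},
	Data:       map[string]string{"hostname": "example"},
}
// Creates the ConfigMap if absent; otherwise overwrites Data on the
// existing object before updating it.
existing, err := createOrUpdate(ctx, c, "operator-ns", cm, func(o *corev1.ConfigMap) {
	o.Data = cm.Data
})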
// getSingleObject searches for k8s objects of type T
// (e.g. corev1.Service) with the given labels, and returns the single
// match. Returns nil if no objects match the labels, and an error if
// more than one object matches.
func getSingleObject[T any, O ptrObject[T]](ctx context.Context, c client.Client, ns string, labels map[string]string) (O, error) {
ret := O(new(T))
kinds, _, err := c.Scheme().ObjectKinds(ret)
if err != nil {
return nil, err
}
if len(kinds) != 1 {
// TODO: the runtime package apparently has a "pick the best
// GVK" function somewhere that might be good enough?
return nil, fmt.Errorf("more than 1 GroupVersionKind for %T", ret)
}
gvk := kinds[0]
gvk.Kind += "List"
lst := unstructured.UnstructuredList{}
lst.SetGroupVersionKind(gvk)
if err := c.List(ctx, &lst, client.InNamespace(ns), client.MatchingLabels(labels)); err != nil {
return nil, err
}
if len(lst.Items) == 0 {
return nil, nil
}
if len(lst.Items) > 1 {
return nil, fmt.Errorf("found multiple matching %T objects", ret)
}
if err := c.Scheme().Convert(&lst.Items[0], ret, nil); err != nil {
return nil, err
}
return ret, nil
}
func defaultBool(envName string, defVal bool) bool {
vs := os.Getenv(envName)
if vs == "" {
return defVal
}
v, _ := opt.Bool(vs).Get()
return v
}
func defaultEnv(envName, defVal string) string {
v := os.Getenv(envName)
if v == "" {
return defVal
}
return v
}
func nameForService(svc *corev1.Service) (string, error) {
if h, ok := svc.Annotations[AnnotationHostname]; ok {
if err := dnsname.ValidLabel(h); err != nil {
return "", fmt.Errorf("invalid Tailscale hostname %q: %w", h, err)
}
return h, nil
}
return svc.Namespace + "-" + svc.Name, nil
DeleteDevice(ctx context.Context, nodeStableID string) error
}

View File

@@ -32,12 +32,15 @@ func TestLoadBalancerClass(t *testing.T) {
t.Fatal(err)
}
sr := &ServiceReconciler{
Client: fc,
tsClient: ft,
defaultTags: []string{"tag:k8s"},
operatorNamespace: "operator-ns",
proxyImage: "tailscale/tailscale",
logger: zl.Sugar(),
Client: fc,
ssr: &tailscaleSTSReconciler{
Client: fc,
tsClient: ft,
defaultTags: []string{"tag:k8s"},
operatorNamespace: "operator-ns",
proxyImage: "tailscale/tailscale",
},
logger: zl.Sugar(),
}
// Create a service that we should manage, and check that the initial round
@@ -153,12 +156,15 @@ func TestAnnotations(t *testing.T) {
t.Fatal(err)
}
sr := &ServiceReconciler{
Client: fc,
tsClient: ft,
defaultTags: []string{"tag:k8s"},
operatorNamespace: "operator-ns",
proxyImage: "tailscale/tailscale",
logger: zl.Sugar(),
Client: fc,
ssr: &tailscaleSTSReconciler{
Client: fc,
tsClient: ft,
defaultTags: []string{"tag:k8s"},
operatorNamespace: "operator-ns",
proxyImage: "tailscale/tailscale",
},
logger: zl.Sugar(),
}
// Create a service that we should manage, and check that the initial round
@@ -250,12 +256,15 @@ func TestAnnotationIntoLB(t *testing.T) {
t.Fatal(err)
}
sr := &ServiceReconciler{
Client: fc,
tsClient: ft,
defaultTags: []string{"tag:k8s"},
operatorNamespace: "operator-ns",
proxyImage: "tailscale/tailscale",
logger: zl.Sugar(),
Client: fc,
ssr: &tailscaleSTSReconciler{
Client: fc,
tsClient: ft,
defaultTags: []string{"tag:k8s"},
operatorNamespace: "operator-ns",
proxyImage: "tailscale/tailscale",
},
logger: zl.Sugar(),
}
// Create a service that we should manage, and check that the initial round
@@ -368,12 +377,15 @@ func TestLBIntoAnnotation(t *testing.T) {
t.Fatal(err)
}
sr := &ServiceReconciler{
Client: fc,
tsClient: ft,
defaultTags: []string{"tag:k8s"},
operatorNamespace: "operator-ns",
proxyImage: "tailscale/tailscale",
logger: zl.Sugar(),
Client: fc,
ssr: &tailscaleSTSReconciler{
Client: fc,
tsClient: ft,
defaultTags: []string{"tag:k8s"},
operatorNamespace: "operator-ns",
proxyImage: "tailscale/tailscale",
},
logger: zl.Sugar(),
}
// Create a service that we should manage, and check that the initial round
@@ -491,12 +503,15 @@ func TestCustomHostname(t *testing.T) {
t.Fatal(err)
}
sr := &ServiceReconciler{
Client: fc,
tsClient: ft,
defaultTags: []string{"tag:k8s"},
operatorNamespace: "operator-ns",
proxyImage: "tailscale/tailscale",
logger: zl.Sugar(),
Client: fc,
ssr: &tailscaleSTSReconciler{
Client: fc,
tsClient: ft,
defaultTags: []string{"tag:k8s"},
operatorNamespace: "operator-ns",
proxyImage: "tailscale/tailscale",
},
logger: zl.Sugar(),
}
// Create a service that we should manage, and check that the initial round
@@ -593,13 +608,16 @@ func TestCustomPriorityClassName(t *testing.T) {
t.Fatal(err)
}
sr := &ServiceReconciler{
Client: fc,
tsClient: ft,
defaultTags: []string{"tag:k8s"},
operatorNamespace: "operator-ns",
proxyImage: "tailscale/tailscale",
proxyPriorityClassName: "tailscale-critical",
logger: zl.Sugar(),
Client: fc,
ssr: &tailscaleSTSReconciler{
Client: fc,
tsClient: ft,
defaultTags: []string{"tag:k8s"},
operatorNamespace: "operator-ns",
proxyImage: "tailscale/tailscale",
proxyPriorityClassName: "tailscale-critical",
},
logger: zl.Sugar(),
}
// Create a service that we should manage, and check that the initial round

View File

@@ -14,14 +14,59 @@ import (
"os"
"strings"
"go.uber.org/zap"
"k8s.io/client-go/rest"
"k8s.io/client-go/transport"
"tailscale.com/client/tailscale"
"tailscale.com/client/tailscale/apitype"
"tailscale.com/hostinfo"
"tailscale.com/tailcfg"
"tailscale.com/tsnet"
"tailscale.com/types/logger"
"tailscale.com/util/set"
)
type whoIsKey struct{}
// whoIsFromRequest returns the WhoIsResponse previously stashed by a call to
// addWhoIsToRequest.
func whoIsFromRequest(r *http.Request) *apitype.WhoIsResponse {
return r.Context().Value(whoIsKey{}).(*apitype.WhoIsResponse)
}
// addWhoIsToRequest stashes who in r's context, retrievable by a call to
// whoIsFromRequest.
func addWhoIsToRequest(r *http.Request, who *apitype.WhoIsResponse) *http.Request {
return r.WithContext(context.WithValue(r.Context(), whoIsKey{}, who))
}
// launchAuthProxy launches the auth proxy, which is a small HTTP server that
// authenticates requests using the Tailscale LocalAPI and then proxies them to
// the kube-apiserver.
func launchAuthProxy(zlog *zap.SugaredLogger, restConfig *rest.Config, s *tsnet.Server) {
hostinfo.SetApp("k8s-operator-proxy")
startlog := zlog.Named("launchAuthProxy")
cfg, err := restConfig.TransportConfig()
if err != nil {
startlog.Fatalf("could not get rest.TransportConfig(): %v", err)
}
// Kubernetes uses SPDY for exec and port-forward; however, SPDY is
// incompatible with HTTP/2, so disable HTTP/2 in the proxy.
tr := http.DefaultTransport.(*http.Transport).Clone()
tr.TLSClientConfig, err = transport.TLSConfigFor(cfg)
if err != nil {
startlog.Fatalf("could not get transport.TLSConfigFor(): %v", err)
}
tr.TLSNextProto = make(map[string]func(authority string, c *tls.Conn) http.RoundTripper)
rt, err := transport.HTTPWrappersForConfig(cfg, tr)
if err != nil {
startlog.Fatalf("could not get rest.TransportConfig(): %v", err)
}
go runAuthProxy(s, rt, zlog.Named("auth-proxy").Infof)
}
// authProxy is an http.Handler that authenticates requests using the Tailscale
// LocalAPI and then proxies them to the Kubernetes API.
type authProxy struct {
@@ -37,8 +82,7 @@ func (h *authProxy) ServeHTTP(w http.ResponseWriter, r *http.Request) {
http.Error(w, "failed to authenticate caller", http.StatusInternalServerError)
return
}
r = r.WithContext(context.WithValue(r.Context(), whoIsKey{}, who))
h.rp.ServeHTTP(w, r)
h.rp.ServeHTTP(w, addWhoIsToRequest(r, who))
}
// runAuthProxy runs an HTTP server that authenticates requests using the
@@ -67,6 +111,10 @@ func runAuthProxy(s *tsnet.Server, rt http.RoundTripper, logf logger.Logf) {
lc: lc,
rp: &httputil.ReverseProxy{
Director: func(r *http.Request) {
// Replace the URL with the Kubernetes APIServer.
r.URL.Scheme = u.Scheme
r.URL.Host = u.Host
// We want to proxy to the Kubernetes API, but we want to use
// the caller's identity to do so. We do this by impersonating
// the caller using the Kubernetes User Impersonation feature:
@@ -85,21 +133,9 @@ func runAuthProxy(s *tsnet.Server, rt http.RoundTripper, logf logger.Logf) {
}
// Now add the impersonation headers that we want.
who := r.Context().Value(whoIsKey{}).(*apitype.WhoIsResponse)
if who.Node.IsTagged() {
// Use the node's FQDN as the username, and the node's tags as the groups.
// "Impersonate-Group" requires "Impersonate-User" to be set.
r.Header.Set("Impersonate-User", strings.TrimSuffix(who.Node.Name, "."))
for _, tag := range who.Node.Tags {
r.Header.Add("Impersonate-Group", tag)
}
} else {
r.Header.Set("Impersonate-User", who.UserProfile.LoginName)
if err := addImpersonationHeaders(r); err != nil {
panic("failed to add impersonation headers: " + err.Error())
}
// Replace the URL with the Kubernetes APIServer.
r.URL.Scheme = u.Scheme
r.URL.Host = u.Host
},
Transport: rt,
},
@@ -118,3 +154,58 @@ func runAuthProxy(s *tsnet.Server, rt http.RoundTripper, logf logger.Logf) {
log.Fatalf("runAuthProxy: failed to serve %v", err)
}
}
const capabilityName = "https://tailscale.com/cap/kubernetes"
type capRule struct {
// Impersonate is a list of rules that specify how to impersonate the caller
// when proxying to the Kubernetes API.
Impersonate *impersonateRule `json:"impersonate,omitempty"`
}
// TODO(maisem): move this to some well-known location so that it can be shared
// with control.
type impersonateRule struct {
Groups []string `json:"groups,omitempty"`
}
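A minimal decoding sketch (the payload shape matches the test data later in this diff; the grant construction itself is illustrative):
capMap := tailcfg.PeerCapMap{
	capabilityName: {
		[]byte(`{"impersonate":{"groups":["group1","group2"]}}`),
	},
}
rules, err := tailcfg.UnmarshalCapJSON[capRule](capMap, capabilityName)
// err != nil for malformed grants (see the "bad-cap" test case below);
// otherwise rules[0].Impersonate.Groups == []string{"group1", "group2"}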
// addImpersonationHeaders adds the appropriate headers to r to impersonate the
// caller when proxying to the Kubernetes API. It uses the WhoIsResponse stashed
// in the context by the authProxy.
func addImpersonationHeaders(r *http.Request) error {
who := whoIsFromRequest(r)
rules, err := tailcfg.UnmarshalCapJSON[capRule](who.CapMap, capabilityName)
if err != nil {
return fmt.Errorf("failed to unmarshal capability: %v", err)
}
var groupsAdded set.Slice[string]
for _, rule := range rules {
if rule.Impersonate == nil {
continue
}
for _, group := range rule.Impersonate.Groups {
if groupsAdded.Contains(group) {
continue
}
r.Header.Add("Impersonate-Group", group)
groupsAdded.Add(group)
}
}
if !who.Node.IsTagged() {
r.Header.Set("Impersonate-User", who.UserProfile.LoginName)
return nil
}
// "Impersonate-Group" requires "Impersonate-User" to be set, so we set it
// to the node FQDN for tagged nodes.
r.Header.Set("Impersonate-User", strings.TrimSuffix(who.Node.Name, "."))
// For legacy behavior (before caps), set the groups to the nodes tags.
if groupsAdded.Slice().Len() == 0 {
for _, tag := range who.Node.Tags {
r.Header.Add("Impersonate-Group", tag)
}
}
return nil
}


@@ -0,0 +1,107 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
package main
import (
"net/http"
"testing"
"github.com/google/go-cmp/cmp"
"tailscale.com/client/tailscale/apitype"
"tailscale.com/tailcfg"
"tailscale.com/util/must"
)
func TestImpersonationHeaders(t *testing.T) {
tests := []struct {
name string
emailish string
tags []string
capMap tailcfg.PeerCapMap
wantHeaders http.Header
}{
{
name: "user",
emailish: "foo@example.com",
wantHeaders: http.Header{
"Impersonate-User": {"foo@example.com"},
},
},
{
name: "tagged",
emailish: "tagged-device",
tags: []string{"tag:foo", "tag:bar"},
wantHeaders: http.Header{
"Impersonate-User": {"node.ts.net"},
"Impersonate-Group": {"tag:foo", "tag:bar"},
},
},
{
name: "user-with-cap",
emailish: "foo@example.com",
capMap: tailcfg.PeerCapMap{
capabilityName: {
[]byte(`{"impersonate":{"groups":["group1","group2"]}}`),
[]byte(`{"impersonate":{"groups":["group1","group3"]}}`), // One group is duplicated.
[]byte(`{"impersonate":{"groups":["group4"]}}`),
[]byte(`{"impersonate":{"groups":["group2"]}}`), // duplicate
// These should be ignored, but should parse correctly.
[]byte(`{}`),
[]byte(`{"impersonate":{}}`),
[]byte(`{"impersonate":{"groups":[]}}`),
},
},
wantHeaders: http.Header{
"Impersonate-Group": {"group1", "group2", "group3", "group4"},
"Impersonate-User": {"foo@example.com"},
},
},
{
name: "tagged-with-cap",
emailish: "tagged-device",
tags: []string{"tag:foo", "tag:bar"},
capMap: tailcfg.PeerCapMap{
capabilityName: {
[]byte(`{"impersonate":{"groups":["group1"]}}`),
},
},
wantHeaders: http.Header{
"Impersonate-Group": {"group1"},
"Impersonate-User": {"node.ts.net"},
},
},
{
name: "bad-cap",
emailish: "tagged-device",
tags: []string{"tag:foo", "tag:bar"},
capMap: tailcfg.PeerCapMap{
capabilityName: {
[]byte(`[]`),
},
},
wantHeaders: http.Header{},
},
}
for _, tc := range tests {
r := must.Get(http.NewRequest("GET", "https://op.ts.net/api/foo", nil))
r = addWhoIsToRequest(r, &apitype.WhoIsResponse{
Node: &tailcfg.Node{
Name: "node.ts.net",
Tags: tc.tags,
},
UserProfile: &tailcfg.UserProfile{
LoginName: tc.emailish,
},
CapMap: tc.capMap,
})
addImpersonationHeaders(r)
if d := cmp.Diff(tc.wantHeaders, r.Header); d != "" {
t.Errorf("unexpected header (-want +got):\n%s", d)
}
}
}

cmd/k8s-operator/sts.go (new file, 392 lines)

@@ -0,0 +1,392 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
package main
import (
"context"
_ "embed"
"fmt"
"os"
"strings"
"go.uber.org/zap"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/yaml"
"tailscale.com/client/tailscale"
"tailscale.com/tailcfg"
"tailscale.com/types/opt"
"tailscale.com/util/dnsname"
)
const (
LabelManaged = "tailscale.com/managed"
LabelParentType = "tailscale.com/parent-resource-type"
LabelParentName = "tailscale.com/parent-resource"
LabelParentNamespace = "tailscale.com/parent-resource-ns"
FinalizerName = "tailscale.com/finalizer"
AnnotationExpose = "tailscale.com/expose"
AnnotationTags = "tailscale.com/tags"
AnnotationHostname = "tailscale.com/hostname"
)
type tailscaleSTSConfig struct {
ParentResourceName string
ParentResourceUID string
ChildResourceLabels map[string]string
TargetIP string
Hostname string
Tags []string // if empty, use defaultTags
}
type tailscaleSTSReconciler struct {
client.Client
tsClient tsClient
defaultTags []string
operatorNamespace string
proxyImage string
proxyPriorityClassName string
}
// Provision ensures that the StatefulSet for the given service is running and
// up to date.
func (a *tailscaleSTSReconciler) Provision(ctx context.Context, logger *zap.SugaredLogger, sts *tailscaleSTSConfig) error {
// Do full reconcile.
hsvc, err := a.reconcileHeadlessService(ctx, logger, sts)
if err != nil {
return fmt.Errorf("failed to reconcile headless service: %w", err)
}
secretName, err := a.createOrGetSecret(ctx, logger, sts, hsvc)
if err != nil {
return fmt.Errorf("failed to create or get API key secret: %w", err)
}
_, err = a.reconcileSTS(ctx, logger, sts, hsvc, secretName)
if err != nil {
return fmt.Errorf("failed to reconcile statefulset: %w", err)
}
return nil
}
// Cleanup removes all resources that were created by Provision with the
// given labels. It returns true when all resources have been removed;
// otherwise it returns false and the caller should retry later.
func (a *tailscaleSTSReconciler) Cleanup(ctx context.Context, logger *zap.SugaredLogger, labels map[string]string) (done bool, _ error) {
// Need to delete the StatefulSet first, and delete it with foreground
// cascading deletion. That way, the pod that's writing to the Secret will
// stop running before we start looking at the Secret's contents, and
// assuming k8s ordering semantics don't mess with us, that should avoid
// tailscale device deletion races where we fail to notice a device that
// should be removed.
sts, err := getSingleObject[appsv1.StatefulSet](ctx, a.Client, a.operatorNamespace, labels)
if err != nil {
return false, fmt.Errorf("getting statefulset: %w", err)
}
if sts != nil {
if !sts.GetDeletionTimestamp().IsZero() {
// Deletion in progress, check again later. We'll get another
// notification when the deletion is complete.
logger.Debugf("waiting for statefulset %s/%s deletion", sts.GetNamespace(), sts.GetName())
return false, nil
}
err := a.DeleteAllOf(ctx, &appsv1.StatefulSet{}, client.InNamespace(a.operatorNamespace), client.MatchingLabels(labels), client.PropagationPolicy(metav1.DeletePropagationForeground))
if err != nil {
return false, fmt.Errorf("deleting statefulset: %w", err)
}
logger.Debugf("started deletion of statefulset %s/%s", sts.GetNamespace(), sts.GetName())
return false, nil
}
id, _, err := a.DeviceInfo(ctx, labels)
if err != nil {
return false, fmt.Errorf("getting device info: %w", err)
}
if id != "" {
// TODO: handle case where the device is already deleted, but the secret
// is still around.
if err := a.tsClient.DeleteDevice(ctx, string(id)); err != nil {
return false, fmt.Errorf("deleting device: %w", err)
}
}
types := []client.Object{
&corev1.Service{},
&corev1.Secret{},
}
for _, typ := range types {
if err := a.DeleteAllOf(ctx, typ, client.InNamespace(a.operatorNamespace), client.MatchingLabels(labels)); err != nil {
return false, err
}
}
return true, nil
}
func (a *tailscaleSTSReconciler) reconcileHeadlessService(ctx context.Context, logger *zap.SugaredLogger, sts *tailscaleSTSConfig) (*corev1.Service, error) {
hsvc := &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
GenerateName: "ts-" + sts.ParentResourceName + "-",
Namespace: a.operatorNamespace,
Labels: sts.ChildResourceLabels,
},
Spec: corev1.ServiceSpec{
ClusterIP: "None",
Selector: map[string]string{
"app": sts.ParentResourceUID,
},
},
}
logger.Debugf("reconciling headless service for StatefulSet")
return createOrUpdate(ctx, a.Client, a.operatorNamespace, hsvc, func(svc *corev1.Service) { svc.Spec = hsvc.Spec })
}
func (a *tailscaleSTSReconciler) createOrGetSecret(ctx context.Context, logger *zap.SugaredLogger, stsC *tailscaleSTSConfig, hsvc *corev1.Service) (string, error) {
secret := &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
// Hardcode a -0 suffix so that in future, if we support
// multiple StatefulSet replicas, we can provision -N for
// those.
Name: hsvc.Name + "-0",
Namespace: a.operatorNamespace,
Labels: stsC.ChildResourceLabels,
},
}
if err := a.Get(ctx, client.ObjectKeyFromObject(secret), secret); err == nil {
logger.Debugf("secret %s/%s already exists", secret.GetNamespace(), secret.GetName())
return secret.Name, nil
} else if !apierrors.IsNotFound(err) {
return "", err
}
// Secret doesn't exist yet, create one. Initially it contains
// only the Tailscale authkey, but once Tailscale starts it'll
// also store the daemon state.
sts, err := getSingleObject[appsv1.StatefulSet](ctx, a.Client, a.operatorNamespace, stsC.ChildResourceLabels)
if err != nil {
return "", err
}
if sts != nil {
// StatefulSet exists, so we have already created the secret.
// If the secret is missing, the user should delete the StatefulSet.
logger.Errorf("Tailscale proxy secret doesn't exist, but the corresponding StatefulSet %s/%s already does. Something is wrong, please delete the StatefulSet.", sts.GetNamespace(), sts.GetName())
return "", nil
}
// Create API Key secret which is going to be used by the statefulset
// to authenticate with Tailscale.
logger.Debugf("creating authkey for new tailscale proxy")
tags := stsC.Tags
if len(tags) == 0 {
tags = a.defaultTags
}
authKey, err := a.newAuthKey(ctx, tags)
if err != nil {
return "", err
}
secret.StringData = map[string]string{
"authkey": authKey,
}
if err := a.Create(ctx, secret); err != nil {
return "", err
}
return secret.Name, nil
}
// DeviceInfo returns the device ID and hostname for the Tailscale device
// associated with the given labels.
func (a *tailscaleSTSReconciler) DeviceInfo(ctx context.Context, childLabels map[string]string) (id tailcfg.StableNodeID, hostname string, err error) {
sec, err := getSingleObject[corev1.Secret](ctx, a.Client, a.operatorNamespace, childLabels)
if err != nil {
return "", "", err
}
if sec == nil {
return "", "", nil
}
id = tailcfg.StableNodeID(sec.Data["device_id"])
if id == "" {
return "", "", nil
}
// Kubernetes chokes on well-formed FQDNs with the trailing dot, so we have
// to remove it.
hostname = strings.TrimSuffix(string(sec.Data["device_fqdn"]), ".")
if hostname == "" {
return "", "", nil
}
return id, hostname, nil
}
func (a *tailscaleSTSReconciler) newAuthKey(ctx context.Context, tags []string) (string, error) {
caps := tailscale.KeyCapabilities{
Devices: tailscale.KeyDeviceCapabilities{
Create: tailscale.KeyDeviceCreateCapabilities{
Reusable: false,
Preauthorized: true,
Tags: tags,
},
},
}
key, _, err := a.tsClient.CreateKey(ctx, caps)
if err != nil {
return "", err
}
return key, nil
}
//go:embed manifests/proxy.yaml
var proxyYaml []byte
func (a *tailscaleSTSReconciler) reconcileSTS(ctx context.Context, logger *zap.SugaredLogger, sts *tailscaleSTSConfig, headlessSvc *corev1.Service, authKeySecret string) (*appsv1.StatefulSet, error) {
var ss appsv1.StatefulSet
if err := yaml.Unmarshal(proxyYaml, &ss); err != nil {
return nil, fmt.Errorf("failed to unmarshal proxy spec: %w", err)
}
container := &ss.Spec.Template.Spec.Containers[0]
container.Image = a.proxyImage
container.Env = append(container.Env,
corev1.EnvVar{
Name: "TS_DEST_IP",
Value: sts.TargetIP,
},
corev1.EnvVar{
Name: "TS_KUBE_SECRET",
Value: authKeySecret,
},
corev1.EnvVar{
Name: "TS_HOSTNAME",
Value: sts.Hostname,
})
ss.ObjectMeta = metav1.ObjectMeta{
Name: headlessSvc.Name,
Namespace: a.operatorNamespace,
Labels: sts.ChildResourceLabels,
}
ss.Spec.ServiceName = headlessSvc.Name
ss.Spec.Selector = &metav1.LabelSelector{
MatchLabels: map[string]string{
"app": sts.ParentResourceUID,
},
}
ss.Spec.Template.ObjectMeta.Labels = map[string]string{
"app": sts.ParentResourceUID,
}
ss.Spec.Template.Spec.PriorityClassName = a.proxyPriorityClassName
logger.Debugf("reconciling statefulset %s/%s", ss.GetNamespace(), ss.GetName())
return createOrUpdate(ctx, a.Client, a.operatorNamespace, &ss, func(s *appsv1.StatefulSet) { s.Spec = ss.Spec })
}
// ptrObject is a type constraint for pointer types that implement
// client.Object.
type ptrObject[T any] interface {
client.Object
*T
}
// createOrUpdate adds obj to the k8s cluster, unless the object already exists,
// in which case update is called to make changes to it. If update is nil, the
// existing object is returned unmodified.
//
// obj is looked up by its Name and Namespace if Name is set, otherwise it's
// looked up by labels.
func createOrUpdate[T any, O ptrObject[T]](ctx context.Context, c client.Client, ns string, obj O, update func(O)) (O, error) {
var (
existing O
err error
)
if obj.GetName() != "" {
existing = new(T)
existing.SetName(obj.GetName())
existing.SetNamespace(obj.GetNamespace())
err = c.Get(ctx, client.ObjectKeyFromObject(obj), existing)
} else {
existing, err = getSingleObject[T, O](ctx, c, ns, obj.GetLabels())
}
if err == nil && existing != nil {
if update != nil {
update(existing)
if err := c.Update(ctx, existing); err != nil {
return nil, err
}
}
return existing, nil
}
if err != nil && !apierrors.IsNotFound(err) {
return nil, fmt.Errorf("failed to get object: %w", err)
}
if err := c.Create(ctx, obj); err != nil {
return nil, err
}
return obj, nil
}
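A hypothetical usage sketch (the ConfigMap and its field values are illustrative, not part of this diff): since obj carries its own Name and Namespace, callers pass only a closure mutating the fields they own:
cm := &corev1.ConfigMap{
	ObjectMeta: metav1.ObjectMeta{Name: "example", Namespace: "operator-ns"},
	Data:       map[string]string{"k": "v"},
}
got, err := createOrUpdate(ctx, c, "operator-ns", cm, func(o *corev1.ConfigMap) {
	o.Data = cm.Data // reconcile only the data we manage
})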
// getSingleObject searches for a k8s object of type T
// (e.g. corev1.Service) with the given labels and returns
// it. It returns nil if no object matches the labels, and an
// error if more than one matches.
func getSingleObject[T any, O ptrObject[T]](ctx context.Context, c client.Client, ns string, labels map[string]string) (O, error) {
ret := O(new(T))
kinds, _, err := c.Scheme().ObjectKinds(ret)
if err != nil {
return nil, err
}
if len(kinds) != 1 {
// TODO: the runtime package apparently has a "pick the best
// GVK" function somewhere that might be good enough?
return nil, fmt.Errorf("more than 1 GroupVersionKind for %T", ret)
}
gvk := kinds[0]
gvk.Kind += "List"
lst := unstructured.UnstructuredList{}
lst.SetGroupVersionKind(gvk)
if err := c.List(ctx, &lst, client.InNamespace(ns), client.MatchingLabels(labels)); err != nil {
return nil, err
}
if len(lst.Items) == 0 {
return nil, nil
}
if len(lst.Items) > 1 {
return nil, fmt.Errorf("found multiple matching %T objects", ret)
}
if err := c.Scheme().Convert(&lst.Items[0], ret, nil); err != nil {
return nil, err
}
return ret, nil
}
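Note that the call sites above name only T; the pointer type O is inferred through the ptrObject constraint, which is why calls read like this:
sec, err := getSingleObject[corev1.Secret](ctx, a.Client, a.operatorNamespace, labels)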
func defaultBool(envName string, defVal bool) bool {
vs := os.Getenv(envName)
if vs == "" {
return defVal
}
v, _ := opt.Bool(vs).Get() // unparseable values fall back to false, not defVal
return v
}
func defaultEnv(envName, defVal string) string {
v := os.Getenv(envName)
if v == "" {
return defVal
}
return v
}
func nameForService(svc *corev1.Service) (string, error) {
if h, ok := svc.Annotations[AnnotationHostname]; ok {
if err := dnsname.ValidLabel(h); err != nil {
return "", fmt.Errorf("invalid Tailscale hostname %q: %w", h, err)
}
return h, nil
}
return svc.Namespace + "-" + svc.Name, nil
}
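A hypothetical example (the service and hostname names are illustrative) of the annotation overriding the default <namespace>-<name> device name:
svc := &corev1.Service{
	ObjectMeta: metav1.ObjectMeta{
		Name:        "web",
		Namespace:   "default",
		Annotations: map[string]string{AnnotationHostname: "my-proxy"},
	},
}
name, err := nameForService(svc) // "my-proxy"; without the annotation, "default-web"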

cmd/k8s-operator/svc.go (new file, 185 lines)

@@ -0,0 +1,185 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
package main
import (
"context"
"fmt"
"strings"
"go.uber.org/zap"
"golang.org/x/exp/slices"
corev1 "k8s.io/api/core/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
)
type ServiceReconciler struct {
client.Client
ssr *tailscaleSTSReconciler
logger *zap.SugaredLogger
}
func childResourceLabels(parent *corev1.Service) map[string]string {
// You might wonder why we're using owner references, since they seem to be
// built for exactly this. Unfortunately, Kubernetes does not support
// cross-namespace ownership, by design. This means we cannot make the
// service being exposed the owner of the implementation details of the
// proxying. Instead, we have to do our own filtering and tracking with
// labels.
return map[string]string{
LabelManaged: "true",
LabelParentName: parent.GetName(),
LabelParentNamespace: parent.GetNamespace(),
LabelParentType: "svc",
}
}
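Illustratively, for a Service named web in namespace default, the tracking labels come out as:
map[string]string{
	"tailscale.com/managed":              "true",
	"tailscale.com/parent-resource":      "web",
	"tailscale.com/parent-resource-ns":   "default",
	"tailscale.com/parent-resource-type": "svc",
}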
func (a *ServiceReconciler) Reconcile(ctx context.Context, req reconcile.Request) (_ reconcile.Result, err error) {
logger := a.logger.With("service-ns", req.Namespace, "service-name", req.Name)
logger.Debugf("starting reconcile")
defer logger.Debugf("reconcile finished")
svc := new(corev1.Service)
err = a.Get(ctx, req.NamespacedName, svc)
if apierrors.IsNotFound(err) {
// Request object not found, could have been deleted after reconcile request.
logger.Debugf("service not found, assuming it was deleted")
return reconcile.Result{}, nil
} else if err != nil {
return reconcile.Result{}, fmt.Errorf("failed to get svc: %w", err)
}
if !svc.DeletionTimestamp.IsZero() || !a.shouldExpose(svc) {
logger.Debugf("service is being deleted or should not be exposed, cleaning up")
return reconcile.Result{}, a.maybeCleanup(ctx, logger, svc)
}
return reconcile.Result{}, a.maybeProvision(ctx, logger, svc)
}
// maybeCleanup removes any existing resources related to serving svc over tailscale.
//
// This function is responsible for removing the finalizer from the service,
// once all associated resources are gone.
func (a *ServiceReconciler) maybeCleanup(ctx context.Context, logger *zap.SugaredLogger, svc *corev1.Service) error {
ix := slices.Index(svc.Finalizers, FinalizerName)
if ix < 0 {
logger.Debugf("no finalizer, nothing to do")
return nil
}
if done, err := a.ssr.Cleanup(ctx, logger, childResourceLabels(svc)); err != nil {
return fmt.Errorf("failed to cleanup: %w", err)
} else if !done {
logger.Debugf("cleanup not done yet, waiting for next reconcile")
return nil
}
svc.Finalizers = append(svc.Finalizers[:ix], svc.Finalizers[ix+1:]...)
if err := a.Update(ctx, svc); err != nil {
return fmt.Errorf("failed to remove finalizer: %w", err)
}
// Unlike most log entries in the reconcile loop, this will get printed
// exactly once at the very end of cleanup, because the final step of
// cleanup removes the tailscale finalizer, which will make all future
// reconciles exit early.
logger.Infof("unexposed service from tailnet")
return nil
}
// maybeProvision ensures that svc is exposed over tailscale, taking any actions
// necessary to reach that state.
//
// This function adds a finalizer to svc, ensuring that we can handle orderly
// deprovisioning later.
func (a *ServiceReconciler) maybeProvision(ctx context.Context, logger *zap.SugaredLogger, svc *corev1.Service) error {
hostname, err := nameForService(svc)
if err != nil {
return err
}
if !slices.Contains(svc.Finalizers, FinalizerName) {
// This log line is printed exactly once during initial provisioning,
// because once the finalizer is in place this block gets skipped. So,
// this is a nice place to tell the operator that the high level,
// multi-reconcile operation is underway.
logger.Infof("exposing service over tailscale")
svc.Finalizers = append(svc.Finalizers, FinalizerName)
if err := a.Update(ctx, svc); err != nil {
return fmt.Errorf("failed to add finalizer: %w", err)
}
}
crl := childResourceLabels(svc)
var tags []string
if tstr, ok := svc.Annotations[AnnotationTags]; ok {
tags = strings.Split(tstr, ",")
}
sts := &tailscaleSTSConfig{
ParentResourceName: svc.Name,
ParentResourceUID: string(svc.UID),
TargetIP: svc.Spec.ClusterIP,
Hostname: hostname,
Tags: tags,
ChildResourceLabels: crl,
}
if err := a.ssr.Provision(ctx, logger, sts); err != nil {
return fmt.Errorf("failed to provision: %w", err)
}
if !a.hasLoadBalancerClass(svc) {
logger.Debugf("service is not a LoadBalancer, so not updating ingress")
return nil
}
_, tsHost, err := a.ssr.DeviceInfo(ctx, crl)
if err != nil {
return fmt.Errorf("failed to get device ID: %w", err)
}
if tsHost == "" {
logger.Debugf("no Tailscale hostname known yet, waiting for proxy pod to finish auth")
// No hostname yet. Wait for the proxy pod to auth.
svc.Status.LoadBalancer.Ingress = nil
if err := a.Status().Update(ctx, svc); err != nil {
return fmt.Errorf("failed to update service status: %w", err)
}
return nil
}
logger.Debugf("setting ingress hostname to %q", tsHost)
svc.Status.LoadBalancer.Ingress = []corev1.LoadBalancerIngress{
{
Hostname: tsHost,
},
}
if err := a.Status().Update(ctx, svc); err != nil {
return fmt.Errorf("failed to update service status: %w", err)
}
return nil
}
func (a *ServiceReconciler) shouldExpose(svc *corev1.Service) bool {
// Headless services can't be exposed, since there is no ClusterIP to
// forward to.
if svc.Spec.ClusterIP == "" || svc.Spec.ClusterIP == "None" {
return false
}
return a.hasLoadBalancerClass(svc) || a.hasAnnotation(svc)
}
func (a *ServiceReconciler) hasLoadBalancerClass(svc *corev1.Service) bool {
return svc != nil &&
svc.Spec.Type == corev1.ServiceTypeLoadBalancer &&
svc.Spec.LoadBalancerClass != nil &&
*svc.Spec.LoadBalancerClass == "tailscale"
}
func (a *ServiceReconciler) hasAnnotation(svc *corev1.Service) bool {
return svc != nil &&
svc.Annotations[AnnotationExpose] == "true"
}
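A sketch of the two qualifying shapes (all names and IPs illustrative): a LoadBalancer with loadBalancerClass tailscale, or any ClusterIP Service carrying the expose annotation:
lbClass := "tailscale"
byClass := &corev1.Service{
	Spec: corev1.ServiceSpec{
		Type:              corev1.ServiceTypeLoadBalancer,
		LoadBalancerClass: &lbClass,
		ClusterIP:         "10.0.0.1",
	},
}
byAnnotation := &corev1.Service{
	ObjectMeta: metav1.ObjectMeta{
		Annotations: map[string]string{AnnotationExpose: "true"},
	},
	Spec: corev1.ServiceSpec{ClusterIP: "10.0.0.2"},
}
// shouldExpose returns true for both; headless Services (ClusterIP "None") never qualify.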


@@ -35,14 +35,13 @@ import (
"net/http"
"net/netip"
"os"
"slices"
"strconv"
"strings"
"time"
"github.com/dsnet/try"
jsonv2 "github.com/go-json-experiment/json"
"golang.org/x/exp/maps"
"golang.org/x/exp/slices"
"tailscale.com/types/logid"
"tailscale.com/types/netlogtype"
"tailscale.com/util/cmpx"
@@ -315,8 +314,8 @@ func mustMakeNamesByAddr() map[netip.Addr]string {
namesByAddr := make(map[netip.Addr]string)
retry:
for i := 0; i < 10; i++ {
maps.Clear(seen)
maps.Clear(namesByAddr)
clear(seen)
clear(namesByAddr)
for _, d := range m.Devices {
name := fieldPrefix(d.Name, i)
if seen[name] {


@@ -19,7 +19,6 @@ import (
"flag"
"fmt"
"io"
"io/ioutil"
"log"
"os"
"path/filepath"
@@ -149,7 +148,7 @@ func getHostKeys(dir string) (ret []ssh.Signer, err error) {
func hostKeyFileOrCreate(keyDir, typ string) ([]byte, error) {
path := filepath.Join(keyDir, "ssh_host_"+typ+"_key")
v, err := ioutil.ReadFile(path)
v, err := os.ReadFile(path)
if err == nil {
return v, nil
}


@@ -14,12 +14,12 @@ import (
"log"
"os"
"runtime"
"slices"
"strings"
"sync"
"text/tabwriter"
"github.com/peterbourgon/ff/v3/ffcli"
"golang.org/x/exp/slices"
"tailscale.com/client/tailscale"
"tailscale.com/envknob"
"tailscale.com/paths"
@@ -120,7 +120,7 @@ change in the future.
pingCmd,
ncCmd,
sshCmd,
funnelCmd,
funnelCmd(),
serveCmd,
versionCmd,
webCmd,


@@ -11,10 +11,10 @@ import (
"fmt"
"os"
"path/filepath"
"slices"
"strings"
"github.com/peterbourgon/ff/v3/ffcli"
"golang.org/x/exp/slices"
"k8s.io/client-go/util/homedir"
"sigs.k8s.io/yaml"
"tailscale.com/version"


@@ -8,14 +8,13 @@ import (
"errors"
"flag"
"fmt"
"os"
"slices"
"strings"
"text/tabwriter"
"github.com/peterbourgon/ff/v3/ffcli"
"golang.org/x/exp/maps"
"golang.org/x/exp/slices"
xmaps "golang.org/x/exp/maps"
"tailscale.com/ipn/ipnstate"
"tailscale.com/tailcfg"
"tailscale.com/util/cmpx"
@@ -182,7 +181,7 @@ func filterFormatAndSortExitNodes(peers []*ipnstate.PeerStatus, filterBy string)
}
filteredExitNodes := filteredExitNodes{
Countries: maps.Values(countries),
Countries: xmaps.Values(countries),
}
for _, country := range filteredExitNodes.Countries {


@@ -9,18 +9,27 @@ import (
"fmt"
"net"
"os"
"slices"
"strconv"
"strings"
"github.com/peterbourgon/ff/v3/ffcli"
"golang.org/x/exp/slices"
"tailscale.com/ipn"
"tailscale.com/ipn/ipnstate"
"tailscale.com/tailcfg"
"tailscale.com/util/mak"
)
var funnelCmd = newFunnelCommand(&serveEnv{lc: &localClient})
var funnelCmd = func() *ffcli.Command {
se := &serveEnv{lc: &localClient}
// This environment variable is used to switch to an in-development
// implementation of the tailscale funnel command.
// See https://github.com/tailscale/tailscale/issues/7844
if os.Getenv("TAILSCALE_FUNNEL_DEV") == "on" {
return newFunnelDevCommand(se)
}
return newFunnelCommand(se)
}
// newFunnelCommand returns a new "funnel" subcommand using e as its environment.
// The funnel subcommand is used to turn on/off the Funnel service.
@@ -35,7 +44,7 @@ func newFunnelCommand(e *serveEnv) *ffcli.Command {
ShortHelp: "Turn on/off Funnel service",
ShortUsage: strings.Join([]string{
"funnel <serve-port> {on|off}",
"funnel status [--json]",
"funnel status [--json] [--memory]",
}, "\n "),
LongHelp: strings.Join([]string{
"Funnel allows you to publish a 'tailscale serve'",
@@ -53,6 +62,7 @@ func newFunnelCommand(e *serveEnv) *ffcli.Command {
ShortHelp: "show current serve/funnel status",
FlagSet: e.newFlags("funnel-status", func(fs *flag.FlagSet) {
fs.BoolVar(&e.json, "json", false, "output JSON")
fs.BoolVar(&e.memory, "memory", false, "in memory config")
}),
UsageFunc: usageFunc,
},
@@ -83,7 +93,7 @@ func (e *serveEnv) runFunnel(ctx context.Context, args []string) error {
if sc == nil {
sc = new(ipn.ServeConfig)
}
st, err := e.getLocalClientStatus(ctx)
st, err := e.getLocalClientStatusWithoutPeers(ctx)
if err != nil {
return fmt.Errorf("getting client status: %w", err)
}
@@ -146,7 +156,7 @@ func (e *serveEnv) verifyFunnelEnabled(ctx context.Context, st *ipnstate.Status,
return nil // already enabled
}
enableErr := e.enableFeatureInteractive(ctx, "funnel", hasFunnelAttrs)
st, statusErr := e.getLocalClientStatus(ctx) // get updated status; interactive flow may block
st, statusErr := e.getLocalClientStatusWithoutPeers(ctx) // get updated status; interactive flow may block
switch {
case statusErr != nil:
return fmt.Errorf("getting client status: %w", statusErr)


@@ -0,0 +1,112 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
package cli
import (
"context"
"flag"
"fmt"
"io"
"os"
"strconv"
"strings"
"github.com/peterbourgon/ff/v3/ffcli"
"tailscale.com/ipn"
)
// newFunnelDevCommand returns a new "funnel" subcommand using e as its environment.
// The funnel subcommand is used to turn on/off the Funnel service.
// Funnel is off by default.
// Funnel allows you to publish a 'tailscale serve' server publicly,
// open to the entire internet.
// It shares the same serveEnv as the "serve" subcommand.
// See newServeCommand and serve.go for more details.
func newFunnelDevCommand(e *serveEnv) *ffcli.Command {
return &ffcli.Command{
Name: "funnel",
ShortHelp: "Turn on/off Funnel service",
ShortUsage: strings.Join([]string{
"funnel <port>",
"funnel status [--json]",
}, "\n "),
LongHelp: strings.Join([]string{
"Funnel allows you to expose your local",
"server publicly to the entire internet.",
"Note that it only supports https servers at this point.",
"This command is in development and is unsupported",
}, "\n"),
Exec: e.runFunnelDev,
UsageFunc: usageFunc,
Subcommands: []*ffcli.Command{
{
Name: "status",
Exec: e.runServeStatus,
ShortHelp: "show current serve/Funnel status",
FlagSet: e.newFlags("funnel-status", func(fs *flag.FlagSet) {
fs.BoolVar(&e.json, "json", false, "output JSON")
}),
UsageFunc: usageFunc,
},
},
}
}
// runFunnelDev is the entry point for the "tailscale funnel" subcommand and
// manages turning on/off Funnel. Funnel is off by default.
//
// Note: funnel is only supported on a single DNS name for now. (2023-08-18)
func (e *serveEnv) runFunnelDev(ctx context.Context, args []string) error {
if len(args) != 1 {
return flag.ErrHelp
}
var source string
port64, err := strconv.ParseUint(args[0], 10, 16)
if err == nil {
source = fmt.Sprintf("http://127.0.0.1:%d", port64)
} else {
source, err = expandProxyTarget(args[0])
}
if err != nil {
return err
}
st, err := e.getLocalClientStatusWithoutPeers(ctx)
if err != nil {
return fmt.Errorf("getting client status: %w", err)
}
if err := e.verifyFunnelEnabled(ctx, st, 443); err != nil {
return err
}
dnsName := strings.TrimSuffix(st.Self.DNSName, ".")
hp := ipn.HostPort(dnsName + ":443") // TODO(marwan-at-work): support the 2 other ports
// In the streaming case, the process stays running in the
// foreground and prints out connections to the HostPort.
//
// The local backend handles updating the ServeConfig as
// necessary, then restores it to its original state once
// the process's context is closed or the client turns off
// Tailscale.
return e.streamServe(ctx, ipn.ServeStreamRequest{
HostPort: hp,
Source: source,
MountPoint: "/", // TODO(marwan-at-work): support multiple mount points
})
}
func (e *serveEnv) streamServe(ctx context.Context, req ipn.ServeStreamRequest) error {
stream, err := e.lc.StreamServe(ctx, req)
if err != nil {
return err
}
defer stream.Close()
fmt.Fprintf(os.Stderr, "Funnel started on \"https://%s\".\n", strings.TrimSuffix(string(req.HostPort), ":443"))
fmt.Fprintf(os.Stderr, "Press Ctrl-C to stop Funnel.\n\n")
_, err = io.Copy(os.Stdout, stream)
return err
}


@@ -18,12 +18,12 @@ import (
"path/filepath"
"reflect"
"runtime"
"slices"
"sort"
"strconv"
"strings"
"github.com/peterbourgon/ff/v3/ffcli"
"golang.org/x/exp/slices"
"tailscale.com/client/tailscale"
"tailscale.com/ipn"
"tailscale.com/ipn/ipnstate"
@@ -129,12 +129,14 @@ func (e *serveEnv) newFlags(name string, setup func(fs *flag.FlagSet)) *flag.Fla
//
// The purpose of this interface is to allow tests to provide a mock.
type localServeClient interface {
Status(context.Context) (*ipnstate.Status, error)
StatusWithoutPeers(context.Context) (*ipnstate.Status, error)
GetServeConfig(context.Context) (*ipn.ServeConfig, error)
GetMemoryServeConfig(context.Context) (*ipn.ServeConfig, error)
SetServeConfig(context.Context, *ipn.ServeConfig) error
QueryFeature(ctx context.Context, feature string) (*tailcfg.QueryFeatureResponse, error)
WatchIPNBus(ctx context.Context, mask ipn.NotifyWatchOpt) (*tailscale.IPNBusWatcher, error)
IncrementCounter(ctx context.Context, name string, delta int) error
StreamServe(ctx context.Context, req ipn.ServeStreamRequest) (io.ReadCloser, error) // TODO: testing :)
}
// serveEnv is the environment the serve command runs within. All I/O should be
@@ -145,7 +147,8 @@ type localServeClient interface {
// It also contains the flags, as registered with newServeCommand.
type serveEnv struct {
// flags
json bool // output JSON (status only for now)
json bool // output JSON (status only for now)
memory bool // show in-memory serve config (status only for now)
lc localServeClient // localClient interface, specific to serve
@@ -158,19 +161,21 @@ type serveEnv struct {
// The trailing dot is removed.
// Returns an error if local client status fails.
func (e *serveEnv) getSelfDNSName(ctx context.Context) (string, error) {
st, err := e.getLocalClientStatus(ctx)
st, err := e.getLocalClientStatusWithoutPeers(ctx)
if err != nil {
return "", fmt.Errorf("getting client status: %w", err)
}
return strings.TrimSuffix(st.Self.DNSName, "."), nil
}
// getLocalClientStatus returns the Status of the local client.
// getLocalClientStatusWithoutPeers returns the Status of the local client
// without any peers in the response.
//
// Returns error if unable to reach tailscaled or if self node is nil.
//
// Exits if status is not running or starting.
func (e *serveEnv) getLocalClientStatus(ctx context.Context) (*ipnstate.Status, error) {
st, err := e.lc.Status(ctx)
func (e *serveEnv) getLocalClientStatusWithoutPeers(ctx context.Context) (*ipnstate.Status, error) {
st, err := e.lc.StatusWithoutPeers(ctx)
if err != nil {
return nil, fixTailscaledConnectError(err)
}
@@ -623,7 +628,13 @@ func (e *serveEnv) handleTCPServeRemove(ctx context.Context, src uint16) error {
// - tailscale status
// - tailscale status --json
func (e *serveEnv) runServeStatus(ctx context.Context, args []string) error {
sc, err := e.lc.GetServeConfig(ctx)
var sc *ipn.ServeConfig
var err error
if e.memory {
sc, err = e.lc.GetMemoryServeConfig(ctx)
} else {
sc, err = e.lc.GetServeConfig(ctx)
}
if err != nil {
return err
}
@@ -641,7 +652,7 @@ func (e *serveEnv) runServeStatus(ctx context.Context, args []string) error {
printf("No serve config\n")
return nil
}
st, err := e.getLocalClientStatus(ctx)
st, err := e.getLocalClientStatusWithoutPeers(ctx)
if err != nil {
return err
}
@@ -849,8 +860,8 @@ func (e *serveEnv) enableFeatureInteractive(ctx context.Context, feature string,
e.lc.IncrementCounter(ctx, fmt.Sprintf("%s_enablement_lost_connection", feature), 1)
return err
}
if nm := n.NetMap; nm != nil && nm.SelfNode != nil {
if hasRequiredCapabilities(nm.SelfNode.Capabilities) {
if nm := n.NetMap; nm != nil && nm.SelfNode.Valid() {
if hasRequiredCapabilities(nm.SelfNode.Capabilities().AsSlice()) {
e.lc.IncrementCounter(ctx, fmt.Sprintf("%s_enabled", feature), 1)
fmt.Fprintln(os.Stdout, "Success.")
return nil


@@ -9,6 +9,7 @@ import (
"errors"
"flag"
"fmt"
"io"
"os"
"path/filepath"
"reflect"
@@ -810,7 +811,7 @@ func TestVerifyFunnelEnabled(t *testing.T) {
defer func() { fakeStatus.Self.Capabilities = oldCaps }() // reset after test
fakeStatus.Self.Capabilities = tt.caps
}
st, err := e.getLocalClientStatus(ctx)
st, err := e.getLocalClientStatusWithoutPeers(ctx)
if err != nil {
t.Fatal(err)
}
@@ -861,7 +862,7 @@ var fakeStatus = &ipnstate.Status{
},
}
func (lc *fakeLocalServeClient) Status(ctx context.Context) (*ipnstate.Status, error) {
func (lc *fakeLocalServeClient) StatusWithoutPeers(ctx context.Context) (*ipnstate.Status, error) {
return fakeStatus, nil
}
@@ -900,6 +901,11 @@ func (lc *fakeLocalServeClient) IncrementCounter(ctx context.Context, name strin
return nil // unused in tests
}
func (lc *fakeLocalServeClient) StreamServe(ctx context.Context, req ipn.ServeStreamRequest) (io.ReadCloser, error) {
// TODO: testing :)
return nil, nil
}
// exactError returns an error checker that wants exactly the provided want error.
// If optName is non-empty, it's used in the error message.
func exactErr(want error, optName ...string) func(error) string {


@@ -15,6 +15,7 @@ import (
"tailscale.com/net/netutil"
"tailscale.com/net/tsaddr"
"tailscale.com/safesocket"
"tailscale.com/types/views"
)
var setCmd = &ffcli.Command{
@@ -171,7 +172,7 @@ func calcAdvertiseRoutesForSet(advertiseExitNodeSet, advertiseRoutesSet bool, cu
if alreadyAdvertisesExitNode == setArgs.advertiseDefaultRoute {
return curPrefs.AdvertiseRoutes, nil
}
routes = tsaddr.FilterPrefixesCopy(curPrefs.AdvertiseRoutes, func(p netip.Prefix) bool {
routes = tsaddr.FilterPrefixesCopy(views.SliceOf(curPrefs.AdvertiseRoutes), func(p netip.Prefix) bool {
return p.Bits() != 0
})
if setArgs.advertiseDefaultRoute {


@@ -23,6 +23,7 @@ import (
"tailscale.com/ipn"
"tailscale.com/ipn/ipnstate"
"tailscale.com/net/interfaces"
"tailscale.com/util/cmpx"
"tailscale.com/util/dnsname"
)
@@ -308,12 +309,20 @@ func dnsOrQuoteHostname(st *ipnstate.Status, ps *ipnstate.PeerStatus) string {
}
func ownerLogin(st *ipnstate.Status, ps *ipnstate.PeerStatus) string {
if ps.UserID.IsZero() {
// We prioritize showing the name of the sharer as the owner of a node if
// it's different from the node's user. This is less surprising: if user B
// from a company shares user C's node from the same company with user A, who
// doesn't know user C, user A might be surprised to see user C listed in
// their netmap. We've historically (2021-01..2023-08) always shown the
// sharer's name in the UI. Perhaps we want to show both here? But the CLI's
// a bit space constrained.
uid := cmpx.Or(ps.AltSharerUserID, ps.UserID)
if uid.IsZero() {
return "-"
}
u, ok := st.User[ps.UserID]
u, ok := st.User[uid]
if !ok {
return fmt.Sprint(ps.UserID)
return fmt.Sprint(uid)
}
if i := strings.Index(u.LoginName, "@"); i != -1 {
return u.LoginName[:i+1]


@@ -78,7 +78,7 @@ func runWeb(ctx context.Context, args []string) error {
return fmt.Errorf("too many non-flag arguments: %q", args)
}
webServer, cleanup := web.NewServer(webArgs.dev, nil)
webServer, cleanup := web.NewServer(webArgs.dev, &localClient)
defer cleanup()
if webArgs.cgi {


@@ -11,7 +11,7 @@ tailscale.com/cmd/tailscale dependencies: (generated by github.com/tailscale/dep
W github.com/alexbrainman/sspi/internal/common from github.com/alexbrainman/sspi/negotiate
W 💣 github.com/alexbrainman/sspi/negotiate from tailscale.com/net/tshttpproxy
L github.com/coreos/go-iptables/iptables from tailscale.com/util/linuxfw
W 💣 github.com/dblohm7/wingoes from tailscale.com/util/winutil/authenticode
W 💣 github.com/dblohm7/wingoes from tailscale.com/util/winutil/authenticode+
W 💣 github.com/dblohm7/wingoes/pe from tailscale.com/util/winutil/authenticode
github.com/fxamacker/cbor/v2 from tailscale.com/tka
github.com/golang/groupcache/lru from tailscale.com/net/dnscache
@@ -22,6 +22,8 @@ tailscale.com/cmd/tailscale dependencies: (generated by github.com/tailscale/dep
L github.com/google/nftables/internal/parseexprfunc from github.com/google/nftables+
L github.com/google/nftables/xt from github.com/google/nftables/expr+
github.com/google/uuid from tailscale.com/util/quarantine+
github.com/gorilla/csrf from tailscale.com/client/web
github.com/gorilla/securecookie from github.com/gorilla/csrf
github.com/hdevalence/ed25519consensus from tailscale.com/tka
L github.com/josharian/native from github.com/mdlayher/netlink+
L 💣 github.com/jsimonetti/rtnetlink from tailscale.com/net/interfaces+
@@ -38,6 +40,7 @@ tailscale.com/cmd/tailscale dependencies: (generated by github.com/tailscale/dep
💣 github.com/mitchellh/go-ps from tailscale.com/cmd/tailscale/cli+
github.com/peterbourgon/ff/v3 from github.com/peterbourgon/ff/v3/ffcli
github.com/peterbourgon/ff/v3/ffcli from tailscale.com/cmd/tailscale/cli
github.com/pkg/errors from github.com/gorilla/csrf
github.com/skip2/go-qrcode from tailscale.com/cmd/tailscale/cli
github.com/skip2/go-qrcode/bitset from github.com/skip2/go-qrcode+
github.com/skip2/go-qrcode/reedsolomon from github.com/skip2/go-qrcode
@@ -168,9 +171,8 @@ tailscale.com/cmd/tailscale dependencies: (generated by github.com/tailscale/dep
golang.org/x/crypto/nacl/secretbox from golang.org/x/crypto/nacl/box
golang.org/x/crypto/pbkdf2 from software.sslmate.com/src/go-pkcs12
golang.org/x/crypto/salsa20/salsa from golang.org/x/crypto/nacl/box+
golang.org/x/exp/constraints from golang.org/x/exp/slices+
golang.org/x/exp/maps from tailscale.com/types/views+
golang.org/x/exp/slices from tailscale.com/net/tsaddr+
W golang.org/x/exp/constraints from github.com/dblohm7/wingoes/pe
golang.org/x/exp/maps from tailscale.com/cmd/tailscale/cli
golang.org/x/net/bpf from github.com/mdlayher/netlink+
golang.org/x/net/dns/dnsmessage from net+
golang.org/x/net/http/httpguts from net/http+
@@ -199,6 +201,7 @@ tailscale.com/cmd/tailscale dependencies: (generated by github.com/tailscale/dep
golang.org/x/time/rate from tailscale.com/cmd/tailscale/cli+
bufio from compress/flate+
bytes from bufio+
cmp from slices
compress/flate from compress/gzip+
compress/gzip from net/http
compress/zlib from image/png+
@@ -234,6 +237,7 @@ tailscale.com/cmd/tailscale dependencies: (generated by github.com/tailscale/dep
encoding/base32 from tailscale.com/tka+
encoding/base64 from encoding/json+
encoding/binary from compress/gzip+
encoding/gob from github.com/gorilla/securecookie
encoding/hex from crypto/x509+
encoding/json from expvar+
encoding/pem from crypto/tls+
@@ -247,7 +251,7 @@ tailscale.com/cmd/tailscale dependencies: (generated by github.com/tailscale/dep
hash/crc32 from compress/gzip+
hash/maphash from go4.org/mem
html from tailscale.com/ipn/ipnstate+
html/template from tailscale.com/client/web
html/template from tailscale.com/client/web+
image from github.com/skip2/go-qrcode+
image/color from github.com/skip2/go-qrcode+
image/png from github.com/skip2/go-qrcode
@@ -256,6 +260,7 @@ tailscale.com/cmd/tailscale dependencies: (generated by github.com/tailscale/dep
io/ioutil from golang.org/x/sys/cpu+
log from expvar+
log/internal from log
maps from tailscale.com/types/views+
math from compress/flate+
math/big from crypto/dsa+
math/bits from compress/flate+
@@ -282,6 +287,7 @@ tailscale.com/cmd/tailscale dependencies: (generated by github.com/tailscale/dep
regexp from github.com/tailscale/goupnp/httpu+
regexp/syntax from regexp
runtime/debug from tailscale.com/util/singleflight+
slices from tailscale.com/cmd/tailscale/cli+
sort from compress/flate+
strconv from compress/flate+
strings from bufio+


@@ -82,13 +82,13 @@ func runMonitor(ctx context.Context, loop bool) error {
}
defer mon.Close()
mon.RegisterChangeCallback(func(changed bool, st *interfaces.State) {
if !changed {
log.Printf("Network monitor fired; no change")
mon.RegisterChangeCallback(func(delta *netmon.ChangeDelta) {
if !delta.Major {
log.Printf("Network monitor fired; not a major change")
return
}
log.Printf("Network monitor fired. New state:")
dump(st)
dump(delta.New)
})
if loop {
log.Printf("Starting link change monitor; initial state:")


@@ -93,6 +93,7 @@ tailscale.com/cmd/tailscaled dependencies: (generated by github.com/tailscale/de
L github.com/google/nftables/expr from github.com/google/nftables+
L github.com/google/nftables/internal/parseexprfunc from github.com/google/nftables+
L github.com/google/nftables/xt from github.com/google/nftables/expr+
github.com/google/uuid from tailscale.com/ipn/ipnlocal
github.com/hdevalence/ed25519consensus from tailscale.com/tka
L 💣 github.com/illarion/gonotify from tailscale.com/net/dns
L github.com/insomniacslk/dhcp/dhcpv4 from tailscale.com/net/tstun
@@ -242,7 +243,6 @@ tailscale.com/cmd/tailscaled dependencies: (generated by github.com/tailscale/de
tailscale.com/ipn/store/mem from tailscale.com/ipn/store+
L tailscale.com/kube from tailscale.com/ipn/store/kubestore
tailscale.com/log/filelogger from tailscale.com/logpolicy
tailscale.com/log/logheap from tailscale.com/control/controlclient
tailscale.com/log/sockstatlog from tailscale.com/ipn/ipnlocal
tailscale.com/logpolicy from tailscale.com/cmd/tailscaled+
tailscale.com/logtail from tailscale.com/control/controlclient+
@@ -325,7 +325,7 @@ tailscale.com/cmd/tailscaled dependencies: (generated by github.com/tailscale/de
💣 tailscale.com/util/deephash from tailscale.com/ipn/ipnlocal+
L 💣 tailscale.com/util/dirwalk from tailscale.com/metrics+
tailscale.com/util/dnsname from tailscale.com/hostinfo+
tailscale.com/util/goroutines from tailscale.com/control/controlclient+
tailscale.com/util/goroutines from tailscale.com/ipn/ipnlocal
tailscale.com/util/groupmember from tailscale.com/ipn/ipnauth
💣 tailscale.com/util/hashx from tailscale.com/util/deephash
tailscale.com/util/httpm from tailscale.com/client/tailscale+
@@ -380,9 +380,8 @@ tailscale.com/cmd/tailscaled dependencies: (generated by github.com/tailscale/de
golang.org/x/crypto/poly1305 from github.com/tailscale/golang-x-crypto/ssh+
golang.org/x/crypto/salsa20/salsa from golang.org/x/crypto/nacl/box+
LD golang.org/x/crypto/ssh from tailscale.com/ssh/tailssh+
golang.org/x/exp/constraints from golang.org/x/exp/slices+
golang.org/x/exp/maps from tailscale.com/wgengine+
golang.org/x/exp/slices from tailscale.com/ipn/ipnlocal+
golang.org/x/exp/constraints from github.com/dblohm7/wingoes/pe+
golang.org/x/exp/maps from tailscale.com/wgengine/magicsock
golang.org/x/net/bpf from github.com/mdlayher/genetlink+
golang.org/x/net/dns/dnsmessage from net+
golang.org/x/net/http/httpguts from golang.org/x/net/http2+
@@ -440,6 +439,7 @@ tailscale.com/cmd/tailscaled dependencies: (generated by github.com/tailscale/de
crypto/tls from github.com/tcnksm/go-httpstat+
crypto/x509 from crypto/tls+
crypto/x509/pkix from crypto/x509+
database/sql/driver from github.com/google/uuid
W debug/dwarf from debug/pe
W debug/pe from github.com/dblohm7/wingoes/pe
embed from tailscale.com+
@@ -468,6 +468,7 @@ tailscale.com/cmd/tailscaled dependencies: (generated by github.com/tailscale/de
log from expvar+
log/internal from log
LD log/syslog from tailscale.com/ssh/tailssh
maps from tailscale.com/types/views+
math from compress/flate+
math/big from crypto/dsa+
math/bits from compress/flate+
@@ -495,9 +496,9 @@ tailscale.com/cmd/tailscaled dependencies: (generated by github.com/tailscale/de
regexp from github.com/coreos/go-iptables/iptables+
regexp/syntax from regexp
runtime/debug from github.com/klauspost/compress/zstd+
runtime/pprof from tailscale.com/log/logheap+
runtime/pprof from net/http/pprof+
runtime/trace from net/http/pprof
slices from tailscale.com/wgengine/magicsock
slices from tailscale.com/wgengine/magicsock+
sort from compress/flate+
strconv from compress/flate+
strings from bufio+


@@ -22,7 +22,7 @@ import (
"strings"
"time"
"golang.org/x/exp/maps"
xmaps "golang.org/x/exp/maps"
"tailscale.com/cmd/testwrapper/flakytest"
)
@@ -270,7 +270,7 @@ func main() {
if len(toRetry) == 0 {
continue
}
pkgs := maps.Keys(toRetry)
pkgs := xmaps.Keys(toRetry)
sort.Strings(pkgs)
nextRun := &nextRun{
attempt: thisRun.attempt + 1,


@@ -12,11 +12,11 @@ import (
"path"
"path/filepath"
"runtime"
"slices"
"strconv"
"time"
esbuild "github.com/evanw/esbuild/pkg/api"
"golang.org/x/exp/slices"
)
const (


@@ -257,24 +257,28 @@ func (i *jsIPN) run(jsCallbacks js.Value) {
},
MachineStatus: jsMachineStatus[nm.MachineStatus],
},
Peers: mapSlice(nm.Peers, func(p *tailcfg.Node) jsNetMapPeerNode {
name := p.Name
Peers: mapSlice(nm.Peers, func(p tailcfg.NodeView) jsNetMapPeerNode {
name := p.Name()
if name == "" {
// In practice this should only happen for Hello.
name = p.Hostinfo.Hostname()
name = p.Hostinfo().Hostname()
}
addrs := make([]string, p.Addresses().Len())
for i := range p.Addresses().LenIter() {
addrs[i] = p.Addresses().At(i).Addr().String()
}
return jsNetMapPeerNode{
jsNetMapNode: jsNetMapNode{
Name: name,
Addresses: mapSlice(p.Addresses, func(a netip.Prefix) string { return a.Addr().String() }),
MachineKey: p.Machine.String(),
NodeKey: p.Key.String(),
Addresses: addrs,
MachineKey: p.Machine().String(),
NodeKey: p.Key().String(),
},
Online: p.Online,
TailscaleSSHEnabled: p.Hostinfo.TailscaleSSHEnabled(),
Online: p.Online(),
TailscaleSSHEnabled: p.Hostinfo().TailscaleSSHEnabled(),
}
}),
LockedOut: nm.TKAEnabled && len(nm.SelfNode.KeySignature) == 0,
LockedOut: nm.TKAEnabled && nm.SelfNode.KeySignature().Len() == 0,
}
if jsonNetMap, err := json.Marshal(jsNetMap); err == nil {
jsCallbacks.Call("notifyNetMap", string(jsonNetMap))


@@ -6,7 +6,10 @@
package tests
import (
"maps"
"net/netip"
"tailscale.com/types/ptr"
)
// Clone makes a deep copy of StructWithPtrs.
@@ -18,12 +21,10 @@ func (src *StructWithPtrs) Clone() *StructWithPtrs {
dst := new(StructWithPtrs)
*dst = *src
if dst.Value != nil {
dst.Value = new(StructWithoutPtrs)
*dst.Value = *src.Value
dst.Value = ptr.To(*src.Value)
}
if dst.Int != nil {
dst.Int = new(int)
*dst.Int = *src.Int
dst.Int = ptr.To(*src.Int)
}
return dst
}
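ptr.To replaces the allocate-then-assign pairs above; a presumably equivalent definition (the real one lives in tailscale.com/types/ptr):
// To returns a pointer to a newly allocated shallow copy of v.
func To[T any](v T) *T { return &v }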
@@ -60,12 +61,7 @@ func (src *Map) Clone() *Map {
}
dst := new(Map)
*dst = *src
if dst.Int != nil {
dst.Int = map[string]int{}
for k, v := range src.Int {
dst.Int[k] = v
}
}
dst.Int = maps.Clone(src.Int)
if dst.SliceInt != nil {
dst.SliceInt = map[string][]int{}
for k := range src.SliceInt {
@@ -84,12 +80,7 @@ func (src *Map) Clone() *Map {
dst.StructPtrWithoutPtr[k] = v.Clone()
}
}
if dst.StructWithoutPtr != nil {
dst.StructWithoutPtr = map[string]StructWithoutPtrs{}
for k, v := range src.StructWithoutPtr {
dst.StructWithoutPtr[k] = v
}
}
dst.StructWithoutPtr = maps.Clone(src.StructWithoutPtr)
if dst.SlicesWithPtrs != nil {
dst.SlicesWithPtrs = map[string][]*StructWithPtrs{}
for k := range src.SlicesWithPtrs {
@@ -102,35 +93,19 @@ func (src *Map) Clone() *Map {
dst.SlicesWithoutPtrs[k] = append([]*StructWithoutPtrs{}, src.SlicesWithoutPtrs[k]...)
}
}
if dst.StructWithoutPtrKey != nil {
dst.StructWithoutPtrKey = map[StructWithoutPtrs]int{}
for k, v := range src.StructWithoutPtrKey {
dst.StructWithoutPtrKey[k] = v
}
}
dst.StructWithoutPtrKey = maps.Clone(src.StructWithoutPtrKey)
if dst.SliceIntPtr != nil {
dst.SliceIntPtr = map[string][]*int{}
for k := range src.SliceIntPtr {
dst.SliceIntPtr[k] = append([]*int{}, src.SliceIntPtr[k]...)
}
}
if dst.PointerKey != nil {
dst.PointerKey = map[*string]int{}
for k, v := range src.PointerKey {
dst.PointerKey[k] = v
}
}
if dst.StructWithPtrKey != nil {
dst.StructWithPtrKey = map[StructWithPtrs]int{}
for k, v := range src.StructWithPtrKey {
dst.StructWithPtrKey[k] = v
}
}
dst.PointerKey = maps.Clone(src.PointerKey)
dst.StructWithPtrKey = maps.Clone(src.StructWithPtrKey)
if dst.StructWithPtr != nil {
dst.StructWithPtr = map[string]StructWithPtrs{}
for k, v := range src.StructWithPtr {
v2 := v.Clone()
dst.StructWithPtr[k] = *v2
dst.StructWithPtr[k] = *(v.Clone())
}
}
return dst
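Dropping the nil guards is safe because the standard library's maps.Clone is nil-in/nil-out, and a shallow clone suffices for maps whose values need no per-element copying:
var m map[string]int
c := maps.Clone(m) // c == nil: a nil map clones to nil, preserving the old semantics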
@@ -175,8 +150,7 @@ func (src *StructWithSlices) Clone() *StructWithSlices {
}
dst.Ints = make([]*int, len(src.Ints))
for i := range dst.Ints {
x := *src.Ints[i]
dst.Ints[i] = &x
dst.Ints[i] = ptr.To(*src.Ints[i])
}
dst.Slice = append(src.Slice[:0:0], src.Slice...)
dst.Prefixes = append(src.Prefixes[:0:0], src.Prefixes...)


@@ -10,7 +10,6 @@ import (
"errors"
"net/netip"
"go4.org/mem"
"tailscale.com/types/views"
)
@@ -309,10 +308,10 @@ func (v StructWithSlicesView) StructPointers() views.SliceView[*StructWithPtrs,
func (v StructWithSlicesView) Structs() StructWithPtrs { panic("unsupported") }
func (v StructWithSlicesView) Ints() *int { panic("unsupported") }
func (v StructWithSlicesView) Slice() views.Slice[string] { return views.SliceOf(v.ж.Slice) }
func (v StructWithSlicesView) Prefixes() views.IPPrefixSlice {
return views.IPPrefixSliceOf(v.ж.Prefixes)
func (v StructWithSlicesView) Prefixes() views.Slice[netip.Prefix] {
return views.SliceOf(v.ж.Prefixes)
}
func (v StructWithSlicesView) Data() mem.RO { return mem.B(v.ж.Data) }
func (v StructWithSlicesView) Data() views.ByteSlice[[]byte] { return views.ByteSliceOf(v.ж.Data) }
// A compilation failure here means this code must be regenerated, with the command at the top of this file.
var _StructWithSlicesViewNeedsRegeneration = StructWithSlices(struct {


@@ -67,9 +67,7 @@ func (v *{{.ViewName}}) UnmarshalJSON(b []byte) error {
{{end}}
{{define "valueField"}}func (v {{.ViewName}}) {{.FieldName}}() {{.FieldType}} { return v.ж.{{.FieldName}} }
{{end}}
{{define "byteSliceField"}}func (v {{.ViewName}}) {{.FieldName}}() mem.RO { return mem.B(v.ж.{{.FieldName}}) }
{{end}}
{{define "ipPrefixSliceField"}}func (v {{.ViewName}}) {{.FieldName}}() views.IPPrefixSlice { return views.IPPrefixSliceOf(v.ж.{{.FieldName}}) }
{{define "byteSliceField"}}func (v {{.ViewName}}) {{.FieldName}}() views.ByteSlice[{{.FieldType}}] { return views.ByteSliceOf(v.ж.{{.FieldName}}) }
{{end}}
{{define "sliceField"}}func (v {{.ViewName}}) {{.FieldName}}() views.Slice[{{.FieldType}}] { return views.SliceOf(v.ж.{{.FieldName}}) }
{{end}}
@@ -171,15 +169,12 @@ func genView(buf *bytes.Buffer, it *codegen.ImportTracker, typ *types.Named, thi
case *types.Slice:
slice := underlying
elem := slice.Elem()
args.FieldType = it.QualifiedName(elem)
switch elem.String() {
case "byte":
it.Import("go4.org/mem")
args.FieldType = it.QualifiedName(fieldType)
writeTemplate("byteSliceField")
case "inet.af/netip.Prefix", "net/netip.Prefix":
it.Import("tailscale.com/types/views")
writeTemplate("ipPrefixSliceField")
default:
args.FieldType = it.QualifiedName(elem)
it.Import("tailscale.com/types/views")
shallow, deep, base := requiresCloning(elem)
if deep {


@@ -1,40 +0,0 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
package controlclient
import (
"bytes"
"compress/gzip"
"context"
"log"
"net/http"
"time"
"tailscale.com/util/goroutines"
)
func dumpGoroutinesToURL(c *http.Client, targetURL string) {
ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
defer cancel()
zbuf := new(bytes.Buffer)
zw := gzip.NewWriter(zbuf)
zw.Write(goroutines.ScrubbedGoroutineDump(true))
zw.Close()
req, err := http.NewRequestWithContext(ctx, "PUT", targetURL, zbuf)
if err != nil {
log.Printf("dumpGoroutinesToURL: %v", err)
return
}
req.Header.Set("Content-Encoding", "gzip")
t0 := time.Now()
_, err = c.Do(req)
d := time.Since(t0).Round(time.Millisecond)
if err != nil {
log.Printf("dumpGoroutinesToURL error: %v to %v (after %v)", err, targetURL, d)
} else {
log.Printf("dumpGoroutinesToURL complete to %v (after %v)", targetURL, d)
}
}


@@ -24,6 +24,7 @@ import (
"runtime"
"strings"
"sync"
"sync/atomic"
"time"
"go4.org/mem"
@@ -32,7 +33,6 @@ import (
"tailscale.com/health"
"tailscale.com/hostinfo"
"tailscale.com/ipn/ipnstate"
"tailscale.com/log/logheap"
"tailscale.com/logtail"
"tailscale.com/net/dnscache"
"tailscale.com/net/dnsfallback"
@@ -61,26 +61,25 @@ import (
// Direct is the client that connects to a tailcontrol server for a node.
type Direct struct {
httpc *http.Client // HTTP client used to talk to tailcontrol
dialer *tsdial.Dialer
dnsCache *dnscache.Resolver
serverURL string // URL of the tailcontrol server
clock tstime.Clock
lastPrintMap time.Time
newDecompressor func() (Decompressor, error)
keepAlive bool
logf logger.Logf
netMon *netmon.Monitor // or nil
discoPubKey key.DiscoPublic
getMachinePrivKey func() (key.MachinePrivate, error)
debugFlags []string
keepSharerAndUserSplit bool
skipIPForwardingCheck bool
pinger Pinger
popBrowser func(url string) // or nil
c2nHandler http.Handler // or nil
onClientVersion func(*tailcfg.ClientVersion) // or nil
onControlTime func(time.Time) // or nil
httpc *http.Client // HTTP client used to talk to tailcontrol
dialer *tsdial.Dialer
dnsCache *dnscache.Resolver
serverURL string // URL of the tailcontrol server
clock tstime.Clock
lastPrintMap time.Time
newDecompressor func() (Decompressor, error)
keepAlive bool
logf logger.Logf
netMon *netmon.Monitor // or nil
discoPubKey key.DiscoPublic
getMachinePrivKey func() (key.MachinePrivate, error)
debugFlags []string
skipIPForwardingCheck bool
pinger Pinger
popBrowser func(url string) // or nil
c2nHandler http.Handler // or nil
onClientVersion func(*tailcfg.ClientVersion) // or nil
onControlTime func(time.Time) // or nil
dialPlan ControlDialPlanner // can be nil
@@ -94,7 +93,7 @@ type Direct struct {
persist persist.PersistView
authKey string
tryingNewKey key.NodePrivate
expiry *time.Time
expiry time.Time // or zero value if none/unknown
hostinfo *tailcfg.Hostinfo // always non-nil
netinfo *tailcfg.NetInfo
endpoints []tailcfg.Endpoint
@@ -126,10 +125,6 @@ type Options struct {
// Status is called when there's a change in status.
Status func(Status)
// KeepSharerAndUserSplit controls whether the client
// understands Node.Sharer. If false, the Sharer is mapped to the User.
KeepSharerAndUserSplit bool
// SkipIPForwardingCheck declares that the host's IP
// forwarding works and should not be double-checked by the
// controlclient package.
@@ -244,28 +239,27 @@ func NewDirect(opts Options) (*Direct, error) {
}
c := &Direct{
httpc: httpc,
getMachinePrivKey: opts.GetMachinePrivateKey,
serverURL: opts.ServerURL,
clock: opts.Clock,
logf: opts.Logf,
newDecompressor: opts.NewDecompressor,
keepAlive: opts.KeepAlive,
persist: opts.Persist.View(),
authKey: opts.AuthKey,
discoPubKey: opts.DiscoPublicKey,
debugFlags: opts.DebugFlags,
keepSharerAndUserSplit: opts.KeepSharerAndUserSplit,
netMon: opts.NetMon,
skipIPForwardingCheck: opts.SkipIPForwardingCheck,
pinger: opts.Pinger,
popBrowser: opts.PopBrowserURL,
onClientVersion: opts.OnClientVersion,
onControlTime: opts.OnControlTime,
c2nHandler: opts.C2NHandler,
dialer: opts.Dialer,
dnsCache: dnsCache,
dialPlan: opts.DialPlan,
httpc: httpc,
getMachinePrivKey: opts.GetMachinePrivateKey,
serverURL: opts.ServerURL,
clock: opts.Clock,
logf: opts.Logf,
newDecompressor: opts.NewDecompressor,
keepAlive: opts.KeepAlive,
persist: opts.Persist.View(),
authKey: opts.AuthKey,
discoPubKey: opts.DiscoPublicKey,
debugFlags: opts.DebugFlags,
netMon: opts.NetMon,
skipIPForwardingCheck: opts.SkipIPForwardingCheck,
pinger: opts.Pinger,
popBrowser: opts.PopBrowserURL,
onClientVersion: opts.OnClientVersion,
onControlTime: opts.OnControlTime,
c2nHandler: opts.C2NHandler,
dialer: opts.Dialer,
dnsCache: dnsCache,
dialPlan: opts.DialPlan,
}
if opts.Hostinfo == nil {
c.SetHostinfo(hostinfo.New())
@@ -444,7 +438,7 @@ func (c *Direct) doLogin(ctx context.Context, opt loginOpt) (mustRegen bool, new
authKey, isWrapped, wrappedSig, wrappedKey := decodeWrappedAuthkey(c.authKey, c.logf)
hi := c.hostInfoLocked()
backendLogID := hi.BackendLogID
expired := c.expiry != nil && !c.expiry.IsZero() && c.expiry.Before(c.clock.Now())
expired := !c.expiry.IsZero() && c.expiry.Before(c.clock.Now())
c.mu.Unlock()
machinePrivKey, err := c.getMachinePrivKey()
@@ -811,10 +805,10 @@ func (c *Direct) SendUpdate(ctx context.Context) error {
return c.sendMapRequest(ctx, false, nil)
}
// If we go more than pollTimeout without hearing from the server,
// If we go more than watchdogTimeout without hearing from the server,
// end the long poll. We should be receiving a keep alive ping
// every minute.
const pollTimeout = 120 * time.Second
const watchdogTimeout = 120 * time.Second
// sendMapRequest makes a /map request to download the network map, calling cb
// with each new netmap. If isStreaming, it will poll forever and only returns
@@ -961,40 +955,48 @@ func (c *Direct) sendMapRequest(ctx context.Context, isStreaming bool, nu Netmap
return nil
}
timeout, timeoutChannel := c.clock.NewTimer(pollTimeout)
timeoutReset := make(chan struct{})
pollDone := make(chan struct{})
defer close(pollDone)
go func() {
for {
select {
case <-pollDone:
vlogf("netmap: ending timeout goroutine")
return
case <-timeoutChannel:
c.logf("map response long-poll timed out!")
cancel()
return
case <-timeoutReset:
if !timeout.Stop() {
select {
case <-timeoutChannel:
case <-pollDone:
vlogf("netmap: ending timeout goroutine")
return
}
}
vlogf("netmap: reset timeout timer")
timeout.Reset(pollTimeout)
}
}
}()
var mapResIdx int // 0 for first message, then 1+ for deltas
sess := newMapSession(persist.PrivateNodeKey())
sess := newMapSession(persist.PrivateNodeKey(), nu)
defer sess.Close()
sess.cancel = cancel
sess.logf = c.logf
sess.vlogf = vlogf
sess.altClock = c.clock
sess.machinePubKey = machinePubKey
sess.keepSharerAndUserSplit = c.keepSharerAndUserSplit
sess.onDebug = c.handleDebugMessage
sess.onConciseNetMapSummary = func(summary string) {
// Occasionally print the netmap header.
// This is handy for debugging, and our logs processing
// pipeline depends on it. (TODO: Remove this dependency.)
now := c.clock.Now()
if now.Sub(c.lastPrintMap) < 5*time.Minute {
return
}
c.lastPrintMap = now
c.logf("[v1] new network map[%d]:\n%s", mapResIdx, summary)
}
sess.onSelfNodeChanged = func(nm *netmap.NetworkMap) {
c.mu.Lock()
defer c.mu.Unlock()
// If we are the ones who last updated persist, then we can update it
// again. Otherwise, we should not touch it. Also, it's only worth
// changing it if the Node info changed.
if persist == c.persist {
newPersist := persist.AsStruct()
newPersist.NodeID = nm.SelfNode.StableID()
newPersist.UserProfile = nm.UserProfiles[nm.User()]
c.persist = newPersist.View()
persist = c.persist
}
c.expiry = nm.Expiry
}
sess.StartWatchdog()
// gotNonKeepAliveMessage is whether we've yet received a MapResponse message without
// KeepAlive set.
var gotNonKeepAliveMessage bool
// If allowStream, then the server will use an HTTP long poll to
// return incremental results. There is always one response right
@@ -1003,8 +1005,8 @@ func (c *Direct) sendMapRequest(ctx context.Context, isStreaming bool, nu Netmap
// the same format before just closing the connection.
// We can use this same read loop either way.
var msg []byte
for i := 0; i == 0 || isStreaming; i++ {
vlogf("netmap: starting size read after %v (poll %v)", time.Since(t0).Round(time.Millisecond), i)
for ; mapResIdx == 0 || isStreaming; mapResIdx++ {
vlogf("netmap: starting size read after %v (poll %v)", time.Since(t0).Round(time.Millisecond), mapResIdx)
var siz [4]byte
if _, err := io.ReadFull(res.Body, siz[:]); err != nil {
vlogf("netmap: size read error after %v: %v", time.Since(t0).Round(time.Millisecond), err)
@@ -1068,7 +1070,7 @@ func (c *Direct) sendMapRequest(ctx context.Context, isStreaming bool, nu Netmap
}
select {
case timeoutReset <- struct{}{}:
case sess.watchdogReset <- struct{}{}:
vlogf("netmap: sent timer reset")
case <-ctx.Done():
c.logf("[v1] netmap: not resetting timer; context done: %v", ctx.Err())
@@ -1080,80 +1082,19 @@ func (c *Direct) sendMapRequest(ctx context.Context, isStreaming bool, nu Netmap
}
metricMapResponseMap.Add(1)
if i > 0 {
if gotNonKeepAliveMessage {
// If we've already seen a non-keep-alive message, this is a delta update.
metricMapResponseMapDelta.Add(1)
} else if resp.Node == nil {
// The very first non-keep-alive message should have Node populated.
c.logf("initial MapResponse lacked Node")
return errors.New("initial MapResponse lacked node")
}
gotNonKeepAliveMessage = true
hasDebug := resp.Debug != nil
// being conservative here: if Debug is not present, set to false
controlknobs.SetDisableUPnP(hasDebug && resp.Debug.DisableUPnP.EqualBool(true))
if hasDebug {
if code := resp.Debug.Exit; code != nil {
c.logf("exiting process with status %v per controlplane", *code)
os.Exit(*code)
}
if resp.Debug.DisableLogTail {
logtail.Disable()
envknob.SetNoLogsNoSupport()
}
if resp.Debug.LogHeapPprof {
go logheap.LogHeap(resp.Debug.LogHeapURL)
}
if resp.Debug.GoroutineDumpURL != "" {
go dumpGoroutinesToURL(c.httpc, resp.Debug.GoroutineDumpURL)
}
if sleep := time.Duration(resp.Debug.SleepSeconds * float64(time.Second)); sleep > 0 {
if err := sleepAsRequested(ctx, c.logf, timeoutReset, sleep, c.clock); err != nil {
return err
}
}
if err := sess.HandleNonKeepAliveMapResponse(ctx, &resp); err != nil {
return err
}
nm := sess.netmapForResponse(&resp)
// Occasionally print the netmap header.
// This is handy for debugging, and our logs processing
// pipeline depends on it. (TODO: Remove this dependency.)
// Code elsewhere prints netmap diffs every time they are received.
now := c.clock.Now()
if now.Sub(c.lastPrintMap) >= 5*time.Minute {
c.lastPrintMap = now
c.logf("[v1] new network map[%d]:\n%s", i, nm.VeryConcise())
}
if nm.SelfNode == nil {
c.logf("MapResponse lacked node")
return errors.New("MapResponse lacked node")
}
if d := nm.Debug; d != nil {
controlUseDERPRoute.Store(d.DERPRoute)
controlTrimWGConfig.Store(d.TrimWGConfig)
}
if DevKnob.StripEndpoints() {
for _, p := range resp.Peers {
p.Endpoints = nil
}
}
if DevKnob.StripCaps() {
nm.SelfNode.Capabilities = nil
}
newPersist := persist.AsStruct()
newPersist.NodeID = nm.SelfNode.StableID
newPersist.UserProfile = nm.UserProfiles[nm.User]
c.mu.Lock()
// If we are the ones who last updated persist, then we can update it
// again. Otherwise, we should not touch it.
if persist == c.persist {
c.persist = newPersist.View()
persist = c.persist
}
c.expiry = &nm.Expiry
c.mu.Unlock()
nu.UpdateFullNetmap(nm)
}
if ctx.Err() != nil {
return ctx.Err()
@@ -1161,6 +1102,45 @@ func (c *Direct) sendMapRequest(ctx context.Context, isStreaming bool, nu Netmap
return nil
}
func (c *Direct) handleDebugMessage(ctx context.Context, debug *tailcfg.Debug, watchdogReset chan<- struct{}) error {
if code := debug.Exit; code != nil {
c.logf("exiting process with status %v per controlplane", *code)
os.Exit(*code)
}
if debug.DisableLogTail {
logtail.Disable()
envknob.SetNoLogsNoSupport()
}
if sleep := time.Duration(debug.SleepSeconds * float64(time.Second)); sleep > 0 {
if err := sleepAsRequested(ctx, c.logf, watchdogReset, sleep, c.clock); err != nil {
return err
}
}
return nil
}
// initDisplayNames mutates any tailcfg.Nodes in resp to populate their display names,
// calling InitDisplayNames on each.
//
// The magicDNSSuffix used is based on selfNode.
func initDisplayNames(selfNode tailcfg.NodeView, resp *tailcfg.MapResponse) {
if resp.Node == nil && len(resp.Peers) == 0 && len(resp.PeersChanged) == 0 {
// Fast path for a common case (delta updates). No need to compute
// magicDNSSuffix.
return
}
magicDNSSuffix := netmap.MagicDNSSuffixOfNodeName(selfNode.Name())
if resp.Node != nil {
resp.Node.InitDisplayNames(magicDNSSuffix)
}
for _, n := range resp.Peers {
n.InitDisplayNames(magicDNSSuffix)
}
for _, n := range resp.PeersChanged {
n.InitDisplayNames(magicDNSSuffix)
}
}
// decode JSON-decodes the res.Body into v. If serverNoiseKey is not specified,
// it uses the serverKey and mkey to decode the message from the NaCl-crypto-box.
func decode(res *http.Response, v any, serverKey, serverNoiseKey key.MachinePublic, mkey key.MachinePrivate) error {
@@ -1323,22 +1303,66 @@ func initDevKnob() devKnobs {
var clock tstime.Clock = tstime.StdClock{}
// opt.Bool configs from control.
// config from control.
var (
controlUseDERPRoute syncs.AtomicValue[opt.Bool]
controlTrimWGConfig syncs.AtomicValue[opt.Bool]
controlDisableDRPO atomic.Bool
controlKeepFullWGConfig atomic.Bool
controlRandomizeClientPort atomic.Bool
controlOneCGNAT syncs.AtomicValue[opt.Bool]
)
// DERPRouteFlag reports the last reported value from control for whether
// DERP route optimization (Issue 150) should be enabled.
func DERPRouteFlag() opt.Bool {
return controlUseDERPRoute.Load()
// DisableDRPO reports whether control says to disable the
// DERP route optimization (Issue 150).
func DisableDRPO() bool {
return controlDisableDRPO.Load()
}
// TrimWGConfig reports the last reported value from control for whether
// we should do lazy wireguard configuration.
func TrimWGConfig() opt.Bool {
return controlTrimWGConfig.Load()
// KeepFullWGConfig reports whether control says we should disable the lazy
// wireguard programming and instead give it the full netmap always.
func KeepFullWGConfig() bool {
return controlKeepFullWGConfig.Load()
}
// RandomizeClientPort reports whether control says we should randomize
// the client port.
func RandomizeClientPort() bool {
return controlRandomizeClientPort.Load()
}
// ControlOneCGNATSetting returns control's OneCGNAT setting, if any.
func ControlOneCGNATSetting() opt.Bool {
return controlOneCGNAT.Load()
}
func setControlKnobsFromNodeAttrs(selfNodeAttrs []string) {
var (
keepFullWG bool
disableDRPO bool
disableUPnP bool
randomizeClientPort bool
oneCGNAT opt.Bool
)
for _, attr := range selfNodeAttrs {
switch attr {
case tailcfg.NodeAttrDebugDisableWGTrim:
keepFullWG = true
case tailcfg.NodeAttrDebugDisableDRPO:
disableDRPO = true
case tailcfg.NodeAttrDisableUPnP:
disableUPnP = true
case tailcfg.NodeAttrRandomizeClientPort:
randomizeClientPort = true
case tailcfg.NodeAttrOneCGNATEnable:
oneCGNAT.Set(true)
case tailcfg.NodeAttrOneCGNATDisable:
oneCGNAT.Set(false)
}
}
controlKeepFullWGConfig.Store(keepFullWG)
controlDisableDRPO.Store(disableDRPO)
controlknobs.SetDisableUPnP(disableUPnP)
controlRandomizeClientPort.Store(randomizeClientPort)
controlOneCGNAT.Store(oneCGNAT)
}
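These getters give other subsystems a race-free view of the last values control pushed down; each one just loads an atomic. A hypothetical standalone consumer, for illustration only (the real call sites in wgengine/magicsock are not shown in this diff):

package main

import (
	"fmt"

	"tailscale.com/control/controlclient"
)

// Reads the control knobs set by setControlKnobsFromNodeAttrs.
// The getters load atomics, so they are safe from any goroutine.
func main() {
	fmt.Println("disable DRPO:", controlclient.DisableDRPO())
	fmt.Println("keep full WG config:", controlclient.KeepFullWGConfig())
	fmt.Println("randomize client port:", controlclient.RandomizeClientPort())
}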
// ipForwardingBroken reports whether the system's IP forwarding is disabled
@@ -1483,7 +1507,11 @@ func answerC2NPing(logf logger.Logf, c2nHandler http.Handler, c *http.Client, pr
}
}
func sleepAsRequested(ctx context.Context, logf logger.Logf, timeoutReset chan<- struct{}, d time.Duration, clock tstime.Clock) error {
// sleepAsRequested implements the sleep for a tailcfg.Debug message requesting
// that the client sleep. The complication is that while we're sleeping (if for
// a long time), we need to periodically reset the watchdog timer before it
// expires.
func sleepAsRequested(ctx context.Context, logf logger.Logf, watchdogReset chan<- struct{}, d time.Duration, clock tstime.Clock) error {
const maxSleep = 5 * time.Minute
if d > maxSleep {
logf("sleeping for %v, capped from server-requested %v ...", maxSleep, d)
@@ -1492,7 +1520,7 @@ func sleepAsRequested(ctx context.Context, logf logger.Logf, timeoutReset chan<-
logf("sleeping for server-requested %v ...", d)
}
ticker, tickerChannel := clock.NewTicker(pollTimeout / 2)
ticker, tickerChannel := clock.NewTicker(watchdogTimeout / 2)
defer ticker.Stop()
timer, timerChannel := clock.NewTimer(d)
defer timer.Stop()
@@ -1504,7 +1532,7 @@ func sleepAsRequested(ctx context.Context, logf logger.Logf, timeoutReset chan<-
return nil
case <-tickerChannel:
select {
case timeoutReset <- struct{}{}:
case watchdogReset <- struct{}{}:
case <-timerChannel:
return nil
case <-ctx.Done():

View File

@@ -4,18 +4,20 @@
package controlclient
import (
"context"
"fmt"
"log"
"net/netip"
"sort"
"tailscale.com/envknob"
"tailscale.com/tailcfg"
"tailscale.com/tstime"
"tailscale.com/types/key"
"tailscale.com/types/logger"
"tailscale.com/types/netmap"
"tailscale.com/types/opt"
"tailscale.com/types/ptr"
"tailscale.com/types/views"
"tailscale.com/util/cmpx"
"tailscale.com/wgengine/filter"
)
@@ -29,14 +31,41 @@ import (
// one MapRequest).
type mapSession struct {
// Immutable fields.
privateNodeKey key.NodePrivate
logf logger.Logf
vlogf logger.Logf
machinePubKey key.MachinePublic
keepSharerAndUserSplit bool // see Options.KeepSharerAndUserSplit
nu NetmapUpdater // called on changes (in addition to the optional hooks below)
privateNodeKey key.NodePrivate
publicNodeKey key.NodePublic
logf logger.Logf
vlogf logger.Logf
machinePubKey key.MachinePublic
altClock tstime.Clock // if nil, regular time is used
cancel context.CancelFunc // always non-nil, shuts down caller's base long poll context
watchdogReset chan struct{} // send to request that the long poll activity watchdog timeout be reset
// sessionAliveCtx is a Background-based context that's alive for the
// duration of the mapSession that we own the lifetime of. It's closed by
// sessionAliveCtxClose.
sessionAliveCtx context.Context
sessionAliveCtxClose context.CancelFunc // closes sessionAliveCtx
// Optional hooks, set once before use.
// onDebug specifies what to do with a *tailcfg.Debug message.
// If the watchdogReset chan is nil, it's not used. Otherwise it can be sent
// to request that the long poll activity watchdog timeout be reset.
onDebug func(_ context.Context, _ *tailcfg.Debug, watchdogReset chan<- struct{}) error
// onConciseNetMapSummary, if non-nil, is called with the Netmap.VeryConcise summary
// whenever a map response is received.
onConciseNetMapSummary func(string)
// onSelfNodeChanged is called before the NetmapUpdater if the self node was
// changed.
onSelfNodeChanged func(*netmap.NetworkMap)
// Fields storing state over the course of multiple MapResponses.
lastNode *tailcfg.Node
lastNode tailcfg.NodeView
peers map[tailcfg.NodeID]*tailcfg.NodeView // pointer to view (oddly). same pointers as sortedPeers.
sortedPeers []*tailcfg.NodeView // same pointers as peers, but sorted by Node.ID
lastDNSConfig *tailcfg.DNSConfig
lastDERPMap *tailcfg.DERPMap
lastUserProfile map[tailcfg.UserID]tailcfg.UserProfile
@@ -44,51 +73,154 @@ type mapSession struct {
lastParsedPacketFilter []filter.Match
lastSSHPolicy *tailcfg.SSHPolicy
collectServices bool
previousPeers []*tailcfg.Node // for delta-purposes
lastDomain string
lastDomainAuditLogID string
lastHealth []string
lastPopBrowserURL string
stickyDebug tailcfg.Debug // accumulated opt.Bool values
lastTKAInfo *tailcfg.TKAInfo
// netMapBuilding is non-nil during a netmapForResponse call,
// containing the value to be returned, once fully populated.
netMapBuilding *netmap.NetworkMap
lastNetmapSummary string // from NetworkMap.VeryConcise
}
func newMapSession(privateNodeKey key.NodePrivate) *mapSession {
// newMapSession returns a mostly unconfigured new mapSession.
//
// Modify its optional fields on the returned value before use.
//
// It must have its Close method called to release resources.
func newMapSession(privateNodeKey key.NodePrivate, nu NetmapUpdater) *mapSession {
ms := &mapSession{
nu: nu,
privateNodeKey: privateNodeKey,
logf: logger.Discard,
vlogf: logger.Discard,
publicNodeKey: privateNodeKey.Public(),
lastDNSConfig: new(tailcfg.DNSConfig),
lastUserProfile: map[tailcfg.UserID]tailcfg.UserProfile{},
watchdogReset: make(chan struct{}),
// Non-nil no-op defaults, to be optionally overridden by the caller.
logf: logger.Discard,
vlogf: logger.Discard,
cancel: func() {},
onDebug: func(context.Context, *tailcfg.Debug, chan<- struct{}) error { return nil },
onConciseNetMapSummary: func(string) {},
onSelfNodeChanged: func(*netmap.NetworkMap) {},
}
ms.sessionAliveCtx, ms.sessionAliveCtxClose = context.WithCancel(context.Background())
return ms
}
func (ms *mapSession) addUserProfile(userID tailcfg.UserID) {
nm := ms.netMapBuilding
if _, dup := nm.UserProfiles[userID]; dup {
// Already populated it from a previous peer.
return
}
if up, ok := ms.lastUserProfile[userID]; ok {
nm.UserProfiles[userID] = up
}
func (ms *mapSession) clock() tstime.Clock {
return cmpx.Or[tstime.Clock](ms.altClock, tstime.StdClock{})
}
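cmpx.Or, used just above, returns the first of its arguments that isn't the zero value for the type, which is what lets altClock act as an optional test override. A standalone sketch of its semantics (not part of this diff):

package main

import (
	"fmt"

	"tailscale.com/util/cmpx"
)

func main() {
	// Or returns the first argument that isn't the zero value.
	fmt.Println(cmpx.Or("", "fallback")) // fallback
	fmt.Println(cmpx.Or(0, 42))          // 42
}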
// netmapForResponse returns a fully populated NetworkMap from a full
// or incremental MapResponse within the session, filling in omitted
// information from prior MapResponse values.
func (ms *mapSession) netmapForResponse(resp *tailcfg.MapResponse) *netmap.NetworkMap {
undeltaPeers(resp, ms.previousPeers)
// StartWatchdog starts the session's watchdog timer.
// If there's no activity in too long, it tears down the connection.
// Call Close to release these resources.
func (ms *mapSession) StartWatchdog() {
timer, timedOutChan := ms.clock().NewTimer(watchdogTimeout)
go func() {
defer timer.Stop()
for {
select {
case <-ms.sessionAliveCtx.Done():
ms.vlogf("netmap: ending timeout goroutine")
return
case <-timedOutChan:
ms.logf("map response long-poll timed out!")
ms.cancel()
return
case <-ms.watchdogReset:
if !timer.Stop() {
select {
case <-timedOutChan:
case <-ms.sessionAliveCtx.Done():
ms.vlogf("netmap: ending timeout goroutine")
return
}
}
ms.vlogf("netmap: reset timeout timer")
timer.Reset(watchdogTimeout)
}
}
}()
}
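The Stop-then-drain sequence above is the standard way to reuse a timer: if Stop reports the timer already fired, the pending value must be drained from its channel before Reset, or a stale expiry would trip the watchdog immediately. A stripped-down standalone sketch of the same pattern with a plain *time.Timer instead of tstime.Clock (not part of this diff):

package main

import "time"

func watchdog(reset, stop <-chan struct{}, onTimeout func()) {
	const d = 120 * time.Second
	t := time.NewTimer(d)
	defer t.Stop()
	for {
		select {
		case <-stop:
			return
		case <-t.C:
			onTimeout() // no activity within d: tear down the poll
			return
		case <-reset:
			if !t.Stop() {
				<-t.C // timer already fired: drain before Reset
			}
			t.Reset(d)
		}
	}
}

func main() {
	reset, stop := make(chan struct{}), make(chan struct{})
	go watchdog(reset, stop, func() {})
	reset <- struct{}{} // activity seen: push the deadline out again
	close(stop)
}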
func (ms *mapSession) Close() {
ms.sessionAliveCtxClose()
}
// HandleNonKeepAliveMapResponse handles a non-KeepAlive MapResponse (full or
// incremental).
//
// All fields that are valid on a KeepAlive MapResponse have already been
// handled.
//
// TODO(bradfitz): make this handle all fields later. For now (2023-08-20) this
// is [re]factoring progress enough.
func (ms *mapSession) HandleNonKeepAliveMapResponse(ctx context.Context, resp *tailcfg.MapResponse) error {
if debug := resp.Debug; debug != nil {
if err := ms.onDebug(ctx, debug, ms.watchdogReset); err != nil {
return err
}
}
if DevKnob.StripEndpoints() {
for _, p := range resp.Peers {
p.Endpoints = nil
}
for _, p := range resp.PeersChanged {
p.Endpoints = nil
}
}
// For responses that mutate the self node, check for updated nodeAttrs.
if resp.Node != nil {
if DevKnob.StripCaps() {
resp.Node.Capabilities = nil
}
setControlKnobsFromNodeAttrs(resp.Node.Capabilities)
}
// Call Node.InitDisplayNames on any changed nodes.
initDisplayNames(cmpx.Or(resp.Node.View(), ms.lastNode), resp)
ms.updateStateFromResponse(resp)
nm := ms.netmap()
ms.lastNetmapSummary = nm.VeryConcise()
ms.onConciseNetMapSummary(ms.lastNetmapSummary)
// If the self node changed, we might need to update persist.
if resp.Node != nil {
ms.onSelfNodeChanged(nm)
}
ms.nu.UpdateFullNetmap(nm)
return nil
}
// updateStats are some stats from updateStateFromResponse, primarily for
// testing. It's meant to be cheap enough to always compute, though. It doesn't
// allocate.
type updateStats struct {
allNew bool
added int
removed int
changed int
}
// updateStateFromResponse updates ms from resp. It takes ownership of resp.
func (ms *mapSession) updateStateFromResponse(resp *tailcfg.MapResponse) {
ms.updatePeersStateFromResponse(resp)
if resp.Node != nil {
ms.lastNode = resp.Node.View()
}
ms.previousPeers = cloneNodes(resp.Peers) // defensive/lazy clone, since this escapes to who knows where
for _, up := range resp.UserProfiles {
ms.lastUserProfile[up.ID] = up
}
// TODO(bradfitz): clean up old user profiles? maybe not worth it.
if dm := resp.DERPMap; dm != nil {
ms.vlogf("netmap: new map contains DERP map")
@@ -144,34 +276,172 @@ func (ms *mapSession) netmapForResponse(resp *tailcfg.MapResponse) *netmap.Netwo
if resp.TKAInfo != nil {
ms.lastTKAInfo = resp.TKAInfo
}
}
debug := resp.Debug
if debug != nil {
if debug.RandomizeClientPort {
debug.SetRandomizeClientPort.Set(true)
// updatePeersStateFromResponse updates ms.peers and ms.sortedPeers from resp. It takes ownership of resp.
func (ms *mapSession) updatePeersStateFromResponse(resp *tailcfg.MapResponse) (stats updateStats) {
defer func() {
if stats.removed > 0 || stats.added > 0 {
ms.rebuildSorted()
}
if debug.ForceBackgroundSTUN {
debug.SetForceBackgroundSTUN.Set(true)
}
copyDebugOptBools(&ms.stickyDebug, debug)
} else if ms.stickyDebug != (tailcfg.Debug{}) {
debug = new(tailcfg.Debug)
}()
if ms.peers == nil {
ms.peers = make(map[tailcfg.NodeID]*tailcfg.NodeView)
}
if debug != nil {
copyDebugOptBools(debug, &ms.stickyDebug)
if !debug.ForceBackgroundSTUN {
debug.ForceBackgroundSTUN, _ = ms.stickyDebug.SetForceBackgroundSTUN.Get()
if len(resp.Peers) > 0 {
// Not delta encoded.
stats.allNew = true
keep := make(map[tailcfg.NodeID]bool, len(resp.Peers))
for _, n := range resp.Peers {
keep[n.ID] = true
if vp, ok := ms.peers[n.ID]; ok {
stats.changed++
*vp = n.View()
} else {
stats.added++
ms.peers[n.ID] = ptr.To(n.View())
}
}
if !debug.RandomizeClientPort {
debug.RandomizeClientPort, _ = ms.stickyDebug.SetRandomizeClientPort.Get()
for id := range ms.peers {
if !keep[id] {
stats.removed++
delete(ms.peers, id)
}
}
// Peers precludes all other delta operations so just return.
return
}
for _, id := range resp.PeersRemoved {
if _, ok := ms.peers[id]; ok {
delete(ms.peers, id)
stats.removed++
}
}
for _, n := range resp.PeersChanged {
if vp, ok := ms.peers[n.ID]; ok {
stats.changed++
*vp = n.View()
} else {
stats.added++
ms.peers[n.ID] = ptr.To(n.View())
}
}
for nodeID, seen := range resp.PeerSeenChange {
if vp, ok := ms.peers[nodeID]; ok {
mut := vp.AsStruct()
if seen {
mut.LastSeen = ptr.To(clock.Now())
} else {
mut.LastSeen = nil
}
*vp = mut.View()
stats.changed++
}
}
for nodeID, online := range resp.OnlineChange {
if vp, ok := ms.peers[nodeID]; ok {
mut := vp.AsStruct()
mut.Online = ptr.To(online)
*vp = mut.View()
stats.changed++
}
}
for _, pc := range resp.PeersChangedPatch {
vp, ok := ms.peers[pc.NodeID]
if !ok {
continue
}
stats.changed++
mut := vp.AsStruct()
if pc.DERPRegion != 0 {
mut.DERP = fmt.Sprintf("%s:%v", tailcfg.DerpMagicIP, pc.DERPRegion)
}
if pc.Cap != 0 {
mut.Cap = pc.Cap
}
if pc.Endpoints != nil {
mut.Endpoints = pc.Endpoints
}
if pc.Key != nil {
mut.Key = *pc.Key
}
if pc.DiscoKey != nil {
mut.DiscoKey = *pc.DiscoKey
}
if v := pc.Online; v != nil {
mut.Online = ptr.To(*v)
}
if v := pc.LastSeen; v != nil {
mut.LastSeen = ptr.To(*v)
}
if v := pc.KeyExpiry; v != nil {
mut.KeyExpiry = *v
}
if v := pc.Capabilities; v != nil {
mut.Capabilities = *v
}
if v := pc.KeySignature; v != nil {
mut.KeySignature = v
}
*vp = mut.View()
}
return
}
// rebuildSorted rebuilds ms.sortedPeers from ms.peers. It should be called
// after any additions or removals from peers.
func (ms *mapSession) rebuildSorted() {
if ms.sortedPeers == nil {
ms.sortedPeers = make([]*tailcfg.NodeView, 0, len(ms.peers))
} else {
if len(ms.sortedPeers) > len(ms.peers) {
clear(ms.sortedPeers[len(ms.peers):])
}
ms.sortedPeers = ms.sortedPeers[:0]
}
for _, p := range ms.peers {
ms.sortedPeers = append(ms.sortedPeers, p)
}
sort.Slice(ms.sortedPeers, func(i, j int) bool {
return ms.sortedPeers[i].ID() < ms.sortedPeers[j].ID()
})
}
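The clear call above relies on the Go 1.21 builtin: applied to a slice, it zeroes the elements in place (length and capacity unchanged), so stale *tailcfg.NodeView pointers in the reused backing array don't keep old nodes reachable. A standalone sketch:

package main

import "fmt"

func main() {
	s := []*int{new(int), new(int), new(int)}
	clear(s[1:]) // zeroes elements in place; len and cap are unchanged
	fmt.Println(s[0] != nil, s[1], s[2]) // true <nil> <nil>
}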
func (ms *mapSession) addUserProfile(nm *netmap.NetworkMap, userID tailcfg.UserID) {
if userID == 0 {
return
}
if _, dup := nm.UserProfiles[userID]; dup {
// Already populated it from a previous peer.
return
}
if up, ok := ms.lastUserProfile[userID]; ok {
nm.UserProfiles[userID] = up
}
}
// netmap returns a fully populated NetworkMap from the last state seen from
// a call to updateStateFromResponse, filling in omitted
// information from prior MapResponse values.
func (ms *mapSession) netmap() *netmap.NetworkMap {
peerViews := make([]tailcfg.NodeView, len(ms.sortedPeers))
for i, vp := range ms.sortedPeers {
peerViews[i] = *vp
}
nm := &netmap.NetworkMap{
NodeKey: ms.privateNodeKey.Public(),
NodeKey: ms.publicNodeKey,
PrivateKey: ms.privateNodeKey,
MachineKey: ms.machinePubKey,
Peers: resp.Peers,
Peers: peerViews,
UserProfiles: make(map[tailcfg.UserID]tailcfg.UserProfile),
Domain: ms.lastDomain,
DomainAuditLogID: ms.lastDomainAuditLogID,
@@ -181,11 +451,9 @@ func (ms *mapSession) netmapForResponse(resp *tailcfg.MapResponse) *netmap.Netwo
SSHPolicy: ms.lastSSHPolicy,
CollectServices: ms.collectServices,
DERPMap: ms.lastDERPMap,
Debug: debug,
ControlHealth: ms.lastHealth,
TKAEnabled: ms.lastTKAInfo != nil && !ms.lastTKAInfo.Disabled,
}
ms.netMapBuilding = nm
if ms.lastTKAInfo != nil && ms.lastTKAInfo.Head != "" {
if err := nm.TKAHead.UnmarshalText([]byte(ms.lastTKAInfo.Head)); err != nil {
@@ -194,186 +462,29 @@ func (ms *mapSession) netmapForResponse(resp *tailcfg.MapResponse) *netmap.Netwo
}
}
if resp.Node != nil {
ms.lastNode = resp.Node
}
if node := ms.lastNode.Clone(); node != nil {
if node := ms.lastNode; node.Valid() {
nm.SelfNode = node
nm.Expiry = node.KeyExpiry
nm.Name = node.Name
nm.Addresses = filterSelfAddresses(node.Addresses)
nm.User = node.User
if node.Hostinfo.Valid() {
nm.Hostinfo = *node.Hostinfo.AsStruct()
}
if node.MachineAuthorized {
nm.Expiry = node.KeyExpiry()
nm.Name = node.Name()
nm.Addresses = filterSelfAddresses(node.Addresses().AsSlice())
if node.MachineAuthorized() {
nm.MachineStatus = tailcfg.MachineAuthorized
} else {
nm.MachineStatus = tailcfg.MachineUnauthorized
}
}
ms.addUserProfile(nm.User)
magicDNSSuffix := nm.MagicDNSSuffix()
if nm.SelfNode != nil {
nm.SelfNode.InitDisplayNames(magicDNSSuffix)
}
for _, peer := range resp.Peers {
peer.InitDisplayNames(magicDNSSuffix)
if !peer.Sharer.IsZero() {
if ms.keepSharerAndUserSplit {
ms.addUserProfile(peer.Sharer)
} else {
peer.User = peer.Sharer
}
}
ms.addUserProfile(peer.User)
ms.addUserProfile(nm, nm.User())
for _, peer := range peerViews {
ms.addUserProfile(nm, peer.Sharer())
ms.addUserProfile(nm, peer.User())
}
if DevKnob.ForceProxyDNS() {
nm.DNS.Proxied = true
}
ms.netMapBuilding = nil
return nm
}
// undeltaPeers updates mapRes.Peers to be complete based on the
// provided previous peer list and the PeersRemoved and PeersChanged
// fields in mapRes, as well as the PeerSeenChange and OnlineChange
// maps.
//
// It then also nils out the delta fields.
func undeltaPeers(mapRes *tailcfg.MapResponse, prev []*tailcfg.Node) {
if len(mapRes.Peers) > 0 {
// Not delta encoded.
if !nodesSorted(mapRes.Peers) {
log.Printf("netmap: undeltaPeers: MapResponse.Peers not sorted; sorting")
sortNodes(mapRes.Peers)
}
return
}
var removed map[tailcfg.NodeID]bool
if pr := mapRes.PeersRemoved; len(pr) > 0 {
removed = make(map[tailcfg.NodeID]bool, len(pr))
for _, id := range pr {
removed[id] = true
}
}
changed := mapRes.PeersChanged
if !nodesSorted(changed) {
log.Printf("netmap: undeltaPeers: MapResponse.PeersChanged not sorted; sorting")
sortNodes(changed)
}
if !nodesSorted(prev) {
// Internal error (unrelated to the network) if we get here.
log.Printf("netmap: undeltaPeers: [unexpected] prev not sorted; sorting")
sortNodes(prev)
}
newFull := prev
if len(removed) > 0 || len(changed) > 0 {
newFull = make([]*tailcfg.Node, 0, len(prev)-len(removed))
for len(prev) > 0 && len(changed) > 0 {
pID := prev[0].ID
cID := changed[0].ID
if removed[pID] {
prev = prev[1:]
continue
}
switch {
case pID < cID:
newFull = append(newFull, prev[0])
prev = prev[1:]
case pID == cID:
newFull = append(newFull, changed[0])
prev, changed = prev[1:], changed[1:]
case cID < pID:
newFull = append(newFull, changed[0])
changed = changed[1:]
}
}
newFull = append(newFull, changed...)
for _, n := range prev {
if !removed[n.ID] {
newFull = append(newFull, n)
}
}
sortNodes(newFull)
}
if len(mapRes.PeerSeenChange) != 0 || len(mapRes.OnlineChange) != 0 || len(mapRes.PeersChangedPatch) != 0 {
peerByID := make(map[tailcfg.NodeID]*tailcfg.Node, len(newFull))
for _, n := range newFull {
peerByID[n.ID] = n
}
now := clock.Now()
for nodeID, seen := range mapRes.PeerSeenChange {
if n, ok := peerByID[nodeID]; ok {
if seen {
n.LastSeen = &now
} else {
n.LastSeen = nil
}
}
}
for nodeID, online := range mapRes.OnlineChange {
if n, ok := peerByID[nodeID]; ok {
online := online
n.Online = &online
}
}
for _, ec := range mapRes.PeersChangedPatch {
if n, ok := peerByID[ec.NodeID]; ok {
if ec.DERPRegion != 0 {
n.DERP = fmt.Sprintf("%s:%v", tailcfg.DerpMagicIP, ec.DERPRegion)
}
if ec.Cap != 0 {
n.Cap = ec.Cap
}
if ec.Endpoints != nil {
n.Endpoints = ec.Endpoints
}
if ec.Key != nil {
n.Key = *ec.Key
}
if ec.DiscoKey != nil {
n.DiscoKey = *ec.DiscoKey
}
if v := ec.Online; v != nil {
n.Online = ptrCopy(v)
}
if v := ec.LastSeen; v != nil {
n.LastSeen = ptrCopy(v)
}
if v := ec.KeyExpiry; v != nil {
n.KeyExpiry = *v
}
if v := ec.Capabilities; v != nil {
n.Capabilities = *v
}
if v := ec.KeySignature; v != nil {
n.KeySignature = v
}
}
}
}
mapRes.Peers = newFull
mapRes.PeersChanged = nil
mapRes.PeersRemoved = nil
}
// ptrCopy returns a pointer to a newly allocated shallow copy of *v.
func ptrCopy[T any](v *T) *T {
if v == nil {
return nil
}
ret := new(T)
*ret = *v
return ret
}
func nodesSorted(v []*tailcfg.Node) bool {
for i, n := range v {
if i > 0 && n.ID <= v[i-1].ID {
@@ -413,18 +524,3 @@ func filterSelfAddresses(in []netip.Prefix) (ret []netip.Prefix) {
return ret
}
}
func copyDebugOptBools(dst, src *tailcfg.Debug) {
copy := func(v *opt.Bool, s opt.Bool) {
if s != "" {
*v = s
}
}
copy(&dst.DERPRoute, src.DERPRoute)
copy(&dst.DisableSubnetsIfPAC, src.DisableSubnetsIfPAC)
copy(&dst.DisableUPnP, src.DisableUPnP)
copy(&dst.OneCGNATRoute, src.OneCGNATRoute)
copy(&dst.SetForceBackgroundSTUN, src.SetForceBackgroundSTUN)
copy(&dst.SetRandomizeClientPort, src.SetRandomizeClientPort)
copy(&dst.TrimWGConfig, src.TrimWGConfig)
}

View File

@@ -4,10 +4,13 @@
package controlclient
import (
"context"
"encoding/json"
"fmt"
"net/netip"
"reflect"
"strings"
"sync/atomic"
"testing"
"time"
@@ -17,12 +20,12 @@ import (
"tailscale.com/tstime"
"tailscale.com/types/key"
"tailscale.com/types/netmap"
"tailscale.com/types/opt"
"tailscale.com/types/ptr"
"tailscale.com/util/mak"
"tailscale.com/util/must"
)
func TestUndeltaPeers(t *testing.T) {
func TestUpdatePeersStateFromResponse(t *testing.T) {
var curTime time.Time
online := func(v bool) func(*tailcfg.Node) {
@@ -54,11 +57,12 @@ func TestUndeltaPeers(t *testing.T) {
}
peers := func(nv ...*tailcfg.Node) []*tailcfg.Node { return nv }
tests := []struct {
name string
mapRes *tailcfg.MapResponse
curTime time.Time
prev []*tailcfg.Node
want []*tailcfg.Node
name string
mapRes *tailcfg.MapResponse
curTime time.Time
prev []*tailcfg.Node
want []*tailcfg.Node
wantStats updateStats
}{
{
name: "full_peers",
@@ -66,6 +70,10 @@ func TestUndeltaPeers(t *testing.T) {
Peers: peers(n(1, "foo"), n(2, "bar")),
},
want: peers(n(1, "foo"), n(2, "bar")),
wantStats: updateStats{
allNew: true,
added: 2,
},
},
{
name: "full_peers_ignores_deltas",
@@ -74,6 +82,10 @@ func TestUndeltaPeers(t *testing.T) {
PeersRemoved: []tailcfg.NodeID{2},
},
want: peers(n(1, "foo"), n(2, "bar")),
wantStats: updateStats{
allNew: true,
added: 2,
},
},
{
name: "add_and_update",
@@ -82,14 +94,21 @@ func TestUndeltaPeers(t *testing.T) {
PeersChanged: peers(n(0, "zero"), n(2, "bar2"), n(3, "three")),
},
want: peers(n(0, "zero"), n(1, "foo"), n(2, "bar2"), n(3, "three")),
wantStats: updateStats{
added: 2, // added IDs 0 and 3
changed: 1, // changed ID 2
},
},
{
name: "remove",
prev: peers(n(1, "foo"), n(2, "bar")),
mapRes: &tailcfg.MapResponse{
PeersRemoved: []tailcfg.NodeID{1},
PeersRemoved: []tailcfg.NodeID{1, 3, 4},
},
want: peers(n(2, "bar")),
wantStats: updateStats{
removed: 1, // ID 1
},
},
{
name: "add_and_remove",
@@ -99,6 +118,10 @@ func TestUndeltaPeers(t *testing.T) {
PeersRemoved: []tailcfg.NodeID{2},
},
want: peers(n(1, "foo2")),
wantStats: updateStats{
changed: 1,
removed: 1,
},
},
{
name: "unchanged",
@@ -111,13 +134,15 @@ func TestUndeltaPeers(t *testing.T) {
prev: peers(n(1, "foo"), n(2, "bar")),
mapRes: &tailcfg.MapResponse{
OnlineChange: map[tailcfg.NodeID]bool{
1: true,
1: true,
404: true,
},
},
want: peers(
n(1, "foo", online(true)),
n(2, "bar"),
),
wantStats: updateStats{changed: 1},
},
{
name: "online_change_offline",
@@ -132,6 +157,7 @@ func TestUndeltaPeers(t *testing.T) {
n(1, "foo", online(false)),
n(2, "bar", online(true)),
),
wantStats: updateStats{changed: 2},
},
{
name: "peer_seen_at",
@@ -147,6 +173,7 @@ func TestUndeltaPeers(t *testing.T) {
n(1, "foo"),
n(2, "bar", seenAt(time.Unix(123, 0))),
),
wantStats: updateStats{changed: 2},
},
{
name: "ep_change_derp",
@@ -157,7 +184,8 @@ func TestUndeltaPeers(t *testing.T) {
DERPRegion: 4,
}},
},
want: peers(n(1, "foo", withDERP("127.3.3.40:4"))),
want: peers(n(1, "foo", withDERP("127.3.3.40:4"))),
wantStats: updateStats{changed: 1},
},
{
name: "ep_change_udp",
@@ -168,10 +196,11 @@ func TestUndeltaPeers(t *testing.T) {
Endpoints: []string{"1.2.3.4:56"},
}},
},
want: peers(n(1, "foo", withEP("1.2.3.4:56"))),
want: peers(n(1, "foo", withEP("1.2.3.4:56"))),
wantStats: updateStats{changed: 1},
},
{
name: "ep_change_udp",
name: "ep_change_udp_2",
prev: peers(n(1, "foo", withDERP("127.3.3.40:3"), withEP("1.2.3.4:111"))),
mapRes: &tailcfg.MapResponse{
PeersChangedPatch: []*tailcfg.PeerChange{{
@@ -179,7 +208,8 @@ func TestUndeltaPeers(t *testing.T) {
Endpoints: []string{"1.2.3.4:56"},
}},
},
want: peers(n(1, "foo", withDERP("127.3.3.40:3"), withEP("1.2.3.4:56"))),
want: peers(n(1, "foo", withDERP("127.3.3.40:3"), withEP("1.2.3.4:56"))),
wantStats: updateStats{changed: 1},
},
{
name: "ep_change_both",
@@ -191,7 +221,8 @@ func TestUndeltaPeers(t *testing.T) {
Endpoints: []string{"1.2.3.4:56"},
}},
},
want: peers(n(1, "foo", withDERP("127.3.3.40:2"), withEP("1.2.3.4:56"))),
want: peers(n(1, "foo", withDERP("127.3.3.40:2"), withEP("1.2.3.4:56"))),
wantStats: updateStats{changed: 1},
},
{
name: "change_key",
@@ -206,6 +237,7 @@ func TestUndeltaPeers(t *testing.T) {
Name: "foo",
Key: key.NodePublicFromRaw32(mem.B(append(make([]byte, 31), 'A'))),
}),
wantStats: updateStats{changed: 1},
},
{
name: "change_key_signature",
@@ -215,11 +247,13 @@ func TestUndeltaPeers(t *testing.T) {
NodeID: 1,
KeySignature: []byte{3, 4},
}},
}, want: peers(&tailcfg.Node{
},
want: peers(&tailcfg.Node{
ID: 1,
Name: "foo",
KeySignature: []byte{3, 4},
}),
wantStats: updateStats{changed: 1},
},
{
name: "change_disco_key",
@@ -229,11 +263,13 @@ func TestUndeltaPeers(t *testing.T) {
NodeID: 1,
DiscoKey: ptr.To(key.DiscoPublicFromRaw32(mem.B(append(make([]byte, 31), 'A')))),
}},
}, want: peers(&tailcfg.Node{
},
want: peers(&tailcfg.Node{
ID: 1,
Name: "foo",
DiscoKey: key.DiscoPublicFromRaw32(mem.B(append(make([]byte, 31), 'A'))),
}),
wantStats: updateStats{changed: 1},
},
{
name: "change_online",
@@ -243,11 +279,13 @@ func TestUndeltaPeers(t *testing.T) {
NodeID: 1,
Online: ptr.To(true),
}},
}, want: peers(&tailcfg.Node{
},
want: peers(&tailcfg.Node{
ID: 1,
Name: "foo",
Online: ptr.To(true),
}),
wantStats: updateStats{changed: 1},
},
{
name: "change_last_seen",
@@ -257,11 +295,13 @@ func TestUndeltaPeers(t *testing.T) {
NodeID: 1,
LastSeen: ptr.To(time.Unix(123, 0).UTC()),
}},
}, want: peers(&tailcfg.Node{
},
want: peers(&tailcfg.Node{
ID: 1,
Name: "foo",
LastSeen: ptr.To(time.Unix(123, 0).UTC()),
}),
wantStats: updateStats{changed: 1},
},
{
name: "change_key_expiry",
@@ -271,11 +311,13 @@ func TestUndeltaPeers(t *testing.T) {
NodeID: 1,
KeyExpiry: ptr.To(time.Unix(123, 0).UTC()),
}},
}, want: peers(&tailcfg.Node{
},
want: peers(&tailcfg.Node{
ID: 1,
Name: "foo",
KeyExpiry: time.Unix(123, 0).UTC(),
}),
wantStats: updateStats{changed: 1},
},
{
name: "change_capabilities",
@@ -285,11 +327,13 @@ func TestUndeltaPeers(t *testing.T) {
NodeID: 1,
Capabilities: ptr.To([]string{"foo"}),
}},
}, want: peers(&tailcfg.Node{
},
want: peers(&tailcfg.Node{
ID: 1,
Name: "foo",
Capabilities: []string{"foo"},
}),
wantStats: updateStats{changed: 1},
}}
for _, tt := range tests {
@@ -298,9 +342,23 @@ func TestUndeltaPeers(t *testing.T) {
curTime = tt.curTime
tstest.Replace(t, &clock, tstime.Clock(tstest.NewClock(tstest.ClockOpts{Start: curTime})))
}
undeltaPeers(tt.mapRes, tt.prev)
if !reflect.DeepEqual(tt.mapRes.Peers, tt.want) {
t.Errorf("wrong results\n got: %s\nwant: %s", formatNodes(tt.mapRes.Peers), formatNodes(tt.want))
ms := newTestMapSession(t, nil)
for _, n := range tt.prev {
mak.Set(&ms.peers, n.ID, ptr.To(n.View()))
}
ms.rebuildSorted()
gotStats := ms.updatePeersStateFromResponse(tt.mapRes)
got := make([]*tailcfg.Node, len(ms.sortedPeers))
for i, vp := range ms.sortedPeers {
got[i] = vp.AsStruct()
}
if gotStats != tt.wantStats {
t.Errorf("got stats = %+v; want %+v", gotStats, tt.wantStats)
}
if !reflect.DeepEqual(got, tt.want) {
t.Errorf("wrong results\n got: %s\nwant: %s", formatNodes(got), formatNodes(tt.want))
}
})
}
@@ -331,12 +389,18 @@ func formatNodes(nodes []*tailcfg.Node) string {
return sb.String()
}
func newTestMapSession(t *testing.T) *mapSession {
ms := newMapSession(key.NewNode())
func newTestMapSession(t testing.TB, nu NetmapUpdater) *mapSession {
ms := newMapSession(key.NewNode(), nu)
t.Cleanup(ms.Close)
ms.logf = t.Logf
return ms
}
func (ms *mapSession) netmapForResponse(res *tailcfg.MapResponse) *netmap.NetworkMap {
ms.updateStateFromResponse(res)
return ms.netmap()
}
func TestNetmapForResponse(t *testing.T) {
t.Run("implicit_packetfilter", func(t *testing.T) {
somePacketFilter := []tailcfg.FilterRule{
@@ -347,7 +411,7 @@ func TestNetmapForResponse(t *testing.T) {
},
},
}
ms := newTestMapSession(t)
ms := newTestMapSession(t, nil)
nm1 := ms.netmapForResponse(&tailcfg.MapResponse{
Node: new(tailcfg.Node),
PacketFilter: somePacketFilter,
@@ -368,7 +432,7 @@ func TestNetmapForResponse(t *testing.T) {
})
t.Run("implicit_dnsconfig", func(t *testing.T) {
someDNSConfig := &tailcfg.DNSConfig{Domains: []string{"foo", "bar"}}
ms := newTestMapSession(t)
ms := newTestMapSession(t, nil)
nm1 := ms.netmapForResponse(&tailcfg.MapResponse{
Node: new(tailcfg.Node),
DNSConfig: someDNSConfig,
@@ -385,7 +449,7 @@ func TestNetmapForResponse(t *testing.T) {
}
})
t.Run("collect_services", func(t *testing.T) {
ms := newTestMapSession(t)
ms := newTestMapSession(t, nil)
var nm *netmap.NetworkMap
wantCollect := func(v bool) {
t.Helper()
@@ -418,7 +482,7 @@ func TestNetmapForResponse(t *testing.T) {
wantCollect(true)
})
t.Run("implicit_domain", func(t *testing.T) {
ms := newTestMapSession(t)
ms := newTestMapSession(t, nil)
var nm *netmap.NetworkMap
want := func(v string) {
t.Helper()
@@ -441,17 +505,19 @@ func TestNetmapForResponse(t *testing.T) {
someNode := &tailcfg.Node{
Name: "foo",
}
wantNode := &tailcfg.Node{
wantNode := (&tailcfg.Node{
Name: "foo",
ComputedName: "foo",
ComputedNameWithHost: "foo",
}
ms := newTestMapSession(t)
nm1 := ms.netmapForResponse(&tailcfg.MapResponse{
}).View()
ms := newTestMapSession(t, nil)
mapRes := &tailcfg.MapResponse{
Node: someNode,
})
if nm1.SelfNode == nil {
}
initDisplayNames(mapRes.Node.View(), mapRes)
ms.updateStateFromResponse(mapRes)
nm1 := ms.netmap()
if !nm1.SelfNode.Valid() {
t.Fatal("nil Node in 1st netmap")
}
if !reflect.DeepEqual(nm1.SelfNode, wantNode) {
@@ -459,8 +525,9 @@ func TestNetmapForResponse(t *testing.T) {
t.Errorf("Node mismatch in 1st netmap; got: %s", j)
}
nm2 := ms.netmapForResponse(&tailcfg.MapResponse{})
if nm2.SelfNode == nil {
ms.updateStateFromResponse(&tailcfg.MapResponse{})
nm2 := ms.netmap()
if !nm2.SelfNode.Valid() {
t.Fatal("nil Node in 2nd netmap")
}
if !reflect.DeepEqual(nm2.SelfNode, wantNode) {
@@ -470,155 +537,6 @@ func TestNetmapForResponse(t *testing.T) {
})
}
// TestDeltaDebug tests that tailcfg.Debug values can be omitted in MapResponses
// entirely or have their opt.Bool values unspecified between MapResponses in a
// session and that should mean no change. (as of capver 37). But two Debug
// fields existed prior to capver 37 that weren't opt.Bool; we test that we both
// still accept the non-opt.Bool form from control for RandomizeClientPort and
// ForceBackgroundSTUN and also accept the new form, keeping the old form in
// sync.
func TestDeltaDebug(t *testing.T) {
type step struct {
got *tailcfg.Debug
want *tailcfg.Debug
}
tests := []struct {
name string
steps []step
}{
{
name: "nothing-to-nothing",
steps: []step{
{nil, nil},
{nil, nil},
},
},
{
name: "sticky-with-old-style-randomize-client-port",
steps: []step{
{
&tailcfg.Debug{RandomizeClientPort: true},
&tailcfg.Debug{
RandomizeClientPort: true,
SetRandomizeClientPort: "true",
},
},
{
nil, // not sent by server
&tailcfg.Debug{
RandomizeClientPort: true,
SetRandomizeClientPort: "true",
},
},
},
},
{
name: "sticky-with-new-style-randomize-client-port",
steps: []step{
{
&tailcfg.Debug{SetRandomizeClientPort: "true"},
&tailcfg.Debug{
RandomizeClientPort: true,
SetRandomizeClientPort: "true",
},
},
{
nil, // not sent by server
&tailcfg.Debug{
RandomizeClientPort: true,
SetRandomizeClientPort: "true",
},
},
},
},
{
name: "opt-bool-sticky-changing-over-time",
steps: []step{
{nil, nil},
{nil, nil},
{
&tailcfg.Debug{OneCGNATRoute: "true"},
&tailcfg.Debug{OneCGNATRoute: "true"},
},
{
nil,
&tailcfg.Debug{OneCGNATRoute: "true"},
},
{
&tailcfg.Debug{OneCGNATRoute: "false"},
&tailcfg.Debug{OneCGNATRoute: "false"},
},
{
nil,
&tailcfg.Debug{OneCGNATRoute: "false"},
},
},
},
{
name: "legacy-ForceBackgroundSTUN",
steps: []step{
{
&tailcfg.Debug{ForceBackgroundSTUN: true},
&tailcfg.Debug{ForceBackgroundSTUN: true, SetForceBackgroundSTUN: "true"},
},
},
},
{
name: "opt-bool-SetForceBackgroundSTUN",
steps: []step{
{
&tailcfg.Debug{SetForceBackgroundSTUN: "true"},
&tailcfg.Debug{ForceBackgroundSTUN: true, SetForceBackgroundSTUN: "true"},
},
},
},
{
name: "server-reset-to-default",
steps: []step{
{
&tailcfg.Debug{SetForceBackgroundSTUN: "true"},
&tailcfg.Debug{ForceBackgroundSTUN: true, SetForceBackgroundSTUN: "true"},
},
{
&tailcfg.Debug{SetForceBackgroundSTUN: "unset"},
&tailcfg.Debug{ForceBackgroundSTUN: false, SetForceBackgroundSTUN: "unset"},
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
ms := newTestMapSession(t)
for stepi, s := range tt.steps {
nm := ms.netmapForResponse(&tailcfg.MapResponse{Debug: s.got})
if !reflect.DeepEqual(nm.Debug, s.want) {
t.Errorf("unexpected result at step index %v; got: %s", stepi, must.Get(json.Marshal(nm.Debug)))
}
}
})
}
}
// Verifies that copyDebugOptBools doesn't miss any opt.Bools.
func TestCopyDebugOptBools(t *testing.T) {
rt := reflect.TypeOf(tailcfg.Debug{})
for i := 0; i < rt.NumField(); i++ {
sf := rt.Field(i)
if sf.Type != reflect.TypeOf(opt.Bool("")) {
continue
}
var src, dst tailcfg.Debug
reflect.ValueOf(&src).Elem().Field(i).Set(reflect.ValueOf(opt.Bool("true")))
if src == (tailcfg.Debug{}) {
t.Fatalf("failed to set field %v", sf.Name)
}
copyDebugOptBools(&dst, &src)
if src != dst {
t.Fatalf("copyDebugOptBools didn't copy field %v", sf.Name)
}
}
}
func TestDeltaDERPMap(t *testing.T) {
regions1 := map[int]*tailcfg.DERPRegion{
1: {
@@ -713,7 +631,7 @@ func TestDeltaDERPMap(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
ms := newTestMapSession(t)
ms := newTestMapSession(t, nil)
for stepi, s := range tt.steps {
nm := ms.netmapForResponse(&tailcfg.MapResponse{DERPMap: s.got})
if !reflect.DeepEqual(nm.DERPMap, s.want) {
@@ -723,3 +641,64 @@ func TestDeltaDERPMap(t *testing.T) {
})
}
}
type countingNetmapUpdater struct {
full atomic.Int64
}
func (nu *countingNetmapUpdater) UpdateFullNetmap(nm *netmap.NetworkMap) {
nu.full.Add(1)
}
func BenchmarkMapSessionDelta(b *testing.B) {
for _, size := range []int{10, 100, 1_000, 10_000} {
b.Run(fmt.Sprintf("size_%d", size), func(b *testing.B) {
ctx := context.Background()
nu := &countingNetmapUpdater{}
ms := newTestMapSession(b, nu)
res := &tailcfg.MapResponse{
Node: &tailcfg.Node{
ID: 1,
Name: "foo.bar.ts.net.",
},
}
for i := 0; i < size; i++ {
res.Peers = append(res.Peers, &tailcfg.Node{
ID: tailcfg.NodeID(i + 2),
Name: fmt.Sprintf("peer%d.bar.ts.net.", i),
DERP: "127.3.3.40:10",
Addresses: []netip.Prefix{netip.MustParsePrefix("100.100.2.3/32"), netip.MustParsePrefix("fd7a:115c:a1e0::123/128")},
AllowedIPs: []netip.Prefix{netip.MustParsePrefix("100.100.2.3/32"), netip.MustParsePrefix("fd7a:115c:a1e0::123/128")},
Endpoints: []string{"192.168.1.2:345", "192.168.1.3:678"},
Hostinfo: (&tailcfg.Hostinfo{
OS: "fooOS",
Hostname: "MyHostname",
Services: []tailcfg.Service{
{Proto: "peerapi4", Port: 1234},
{Proto: "peerapi6", Port: 1234},
{Proto: "peerapi-dns-proxy", Port: 1},
},
}).View(),
LastSeen: ptr.To(time.Unix(int64(i), 0)),
})
}
ms.HandleNonKeepAliveMapResponse(ctx, res)
b.ResetTimer()
b.ReportAllocs()
// Now for the core of the benchmark loop, just toggle
// a single node's online status.
for i := 0; i < b.N; i++ {
if err := ms.HandleNonKeepAliveMapResponse(ctx, &tailcfg.MapResponse{
OnlineChange: map[tailcfg.NodeID]bool{
2: i%2 == 0,
},
}); err != nil {
b.Fatal(err)
}
}
})
}
}

View File

@@ -8,9 +8,9 @@ package logknob
import (
"sync/atomic"
"golang.org/x/exp/slices"
"tailscale.com/envknob"
"tailscale.com/types/logger"
"tailscale.com/types/views"
)
// TODO(andrew-d): should we have a package-global registry of logknobs? It
@@ -58,7 +58,7 @@ func (lk *LogKnob) Set(v bool) {
// about; we use this rather than a concrete type to avoid a circular
// dependency.
type NetMap interface {
SelfCapabilities() []string
SelfCapabilities() views.Slice[string]
}
// UpdateFromNetMap will enable logging if the SelfNode in the provided NetMap
@@ -68,7 +68,7 @@ func (lk *LogKnob) UpdateFromNetMap(nm NetMap) {
return
}
lk.cap.Store(slices.Contains(nm.SelfCapabilities(), lk.capName))
lk.cap.Store(views.SliceContains(nm.SelfCapabilities(), lk.capName))
}
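views.SliceContains is the read-only analogue of slices.Contains for a views.Slice, which can't be passed to the slices package directly. A standalone sketch (not part of this diff):

package main

import (
	"fmt"

	"tailscale.com/types/views"
)

func main() {
	caps := views.SliceOf([]string{"https://tailscale.com/cap/testing"})
	fmt.Println(views.SliceContains(caps, "https://tailscale.com/cap/testing")) // true
	fmt.Println(views.SliceContains(caps, "missing"))                           // false
}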
// Do will call log with the provided format and arguments if any of the

View File

@@ -63,11 +63,11 @@ func TestLogKnob(t *testing.T) {
}
testKnob.UpdateFromNetMap(&netmap.NetworkMap{
SelfNode: &tailcfg.Node{
SelfNode: (&tailcfg.Node{
Capabilities: []string{
"https://tailscale.com/cap/testing",
},
},
}).View(),
})
if !testKnob.shouldLog() {
t.Errorf("expected shouldLog()=true")

View File

@@ -115,4 +115,4 @@
in
flake-utils.lib.eachDefaultSystem (system: flakeForSystem nixpkgs system);
}
# nix-direnv cache busting line: sha256-Fr4VZcKrXnT1PZuEG110KBefjcZzRsQRBSvByELKAy4=
# nix-direnv cache busting line: sha256-wPy/uDsfPq3UWE+OrGBE47kDCRMAeEI+YACU1Md2gbI=

go.mod
View File

@@ -18,7 +18,7 @@ require (
github.com/coreos/go-systemd v0.0.0-20191104093116-d3cd4ed1dbcf
github.com/creack/pty v1.1.18
github.com/dave/jennifer v1.6.1
github.com/dblohm7/wingoes v0.0.0-20230803162905-5c6286bb8c6e
github.com/dblohm7/wingoes v0.0.0-20230821191801-fc76608aecf0
github.com/dsnet/try v0.0.3
github.com/evanw/esbuild v0.14.53
github.com/frankban/quicktest v1.14.5
@@ -74,14 +74,14 @@ require (
go.uber.org/zap v1.24.0
go4.org/mem v0.0.0-20220726221520-4f986261bf13
go4.org/netipx v0.0.0-20230728180743-ad4cb58a6516
golang.org/x/crypto v0.11.0
golang.org/x/crypto v0.12.0
golang.org/x/exp v0.0.0-20230725093048-515e97ebf090
golang.org/x/mod v0.11.0
golang.org/x/net v0.10.0
golang.org/x/net v0.14.0
golang.org/x/oauth2 v0.7.0
golang.org/x/sync v0.2.0
golang.org/x/sys v0.10.0
golang.org/x/term v0.10.0
golang.org/x/sys v0.11.0
golang.org/x/term v0.11.0
golang.org/x/time v0.3.0
golang.org/x/tools v0.9.1
golang.zx2c4.com/wintun v0.0.0-20230126152724-0fa3db229ce2
@@ -100,6 +100,8 @@ require (
software.sslmate.com/src/go-pkcs12 v0.2.0
)
require github.com/gorilla/securecookie v1.1.1 // indirect
require (
4d63.com/gocheckcompilerdirectives v1.2.1 // indirect
4d63.com/gochecknoglobals v0.2.1 // indirect
@@ -208,6 +210,7 @@ require (
github.com/gordonklaus/ineffassign v0.0.0-20230107090616-13ace0543b28 // indirect
github.com/goreleaser/chglog v0.5.0 // indirect
github.com/goreleaser/fileglob v1.3.0 // indirect
github.com/gorilla/csrf v1.7.1
github.com/gostaticanalysis/analysisutil v0.7.1 // indirect
github.com/gostaticanalysis/comment v1.4.2 // indirect
github.com/gostaticanalysis/forcetypeassert v0.1.0 // indirect
@@ -335,7 +338,7 @@ require (
go.uber.org/multierr v1.11.0 // indirect
golang.org/x/exp/typeparams v0.0.0-20230425010034-47ecfdc1ba53 // indirect
golang.org/x/image v0.7.0 // indirect
golang.org/x/text v0.11.0 // indirect
golang.org/x/text v0.12.0 // indirect
gomodules.xyz/jsonpatch/v2 v2.3.0 // indirect
google.golang.org/appengine v1.6.7 // indirect
google.golang.org/protobuf v1.30.0 // indirect

View File

@@ -1 +1 @@
sha256-Fr4VZcKrXnT1PZuEG110KBefjcZzRsQRBSvByELKAy4=
sha256-wPy/uDsfPq3UWE+OrGBE47kDCRMAeEI+YACU1Md2gbI=

go.sum
View File

@@ -222,8 +222,8 @@ github.com/dave/jennifer v1.6.1/go.mod h1:nXbxhEmQfOZhWml3D1cDK5M1FLnMSozpbFN/m3
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dblohm7/wingoes v0.0.0-20230803162905-5c6286bb8c6e h1:tTRuQNnXKO6Ffu62nk9bnnPx/m+IyNMdFFfzsETyRO8=
github.com/dblohm7/wingoes v0.0.0-20230803162905-5c6286bb8c6e/go.mod h1:6NCrWM5jRefaG7iN0iMShPalLsljHWBh9v1zxM2f8Xs=
github.com/dblohm7/wingoes v0.0.0-20230821191801-fc76608aecf0 h1:/dgKwHVTI0J+A0zd/BHOF2CTn1deN0735cJrb+w2hbE=
github.com/dblohm7/wingoes v0.0.0-20230821191801-fc76608aecf0/go.mod h1:6NCrWM5jRefaG7iN0iMShPalLsljHWBh9v1zxM2f8Xs=
github.com/denis-tingaikin/go-header v0.4.3 h1:tEaZKAlqql6SKCY++utLmkPLd6K8IBM20Ha7UVm+mtU=
github.com/denis-tingaikin/go-header v0.4.3/go.mod h1:0wOCWuN71D5qIgE2nz9KrKmuYBAC2Mra5RassOIQ2/c=
github.com/docker/cli v23.0.5+incompatible h1:ufWmAOuD3Vmr7JP2G5K3cyuNC4YZWiAsuDEvFVVDafE=
@@ -478,6 +478,10 @@ github.com/goreleaser/fileglob v1.3.0 h1:/X6J7U8lbDpQtBvGcwwPS6OpzkNVlVEsFUVRx9+
github.com/goreleaser/fileglob v1.3.0/go.mod h1:Jx6BoXv3mbYkEzwm9THo7xbr5egkAraxkGorbJb4RxU=
github.com/goreleaser/nfpm/v2 v2.32.1-0.20230803123630-24a43c5ad7cf h1:X8rzot0Te1TYSoADyMZfPt95Afhptpj0VqicKPAcmjM=
github.com/goreleaser/nfpm/v2 v2.32.1-0.20230803123630-24a43c5ad7cf/go.mod h1:Z7rAxucnQGMGfAhpxm/UIrdH0/EcxEt91RW3mmVzx2U=
github.com/gorilla/csrf v1.7.1 h1:Ir3o2c1/Uzj6FBxMlAUB6SivgVMy1ONXwYgXn+/aHPE=
github.com/gorilla/csrf v1.7.1/go.mod h1:+a/4tCmqhG6/w4oafeAZ9pEa3/NZOWYVbD9fV0FwIQA=
github.com/gorilla/securecookie v1.1.1 h1:miw7JPhV+b/lAHSXz4qd/nN9jRiAFV5FwjeKyCS8BvQ=
github.com/gorilla/securecookie v1.1.1/go.mod h1:ra0sb63/xPlUeL+yeDciTfxMRAA+MP+HVt/4epWDjd4=
github.com/gorilla/websocket v1.4.1/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/gorilla/websocket v1.4.2 h1:+/TMaTYc4QFitKJxsQ7Yye35DkWvkdLcvGKqM+x0Ufc=
github.com/gorilla/websocket v1.4.2/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
@@ -986,8 +990,8 @@ golang.org/x/crypto v0.1.0/go.mod h1:RecgLatLF4+eUMCP1PoPZQb+cVrJcOPbHkTkbkB9sbw
golang.org/x/crypto v0.3.0/go.mod h1:hebNnKkNXi2UzZN1eVRvBB7co0a+JxK6XbPiWVs/3J4=
golang.org/x/crypto v0.3.1-0.20221117191849-2c476679df9a/go.mod h1:hebNnKkNXi2UzZN1eVRvBB7co0a+JxK6XbPiWVs/3J4=
golang.org/x/crypto v0.7.0/go.mod h1:pYwdfH91IfpZVANVyUOhSIPZaFoJGxTFbZhFTx+dXZU=
golang.org/x/crypto v0.11.0 h1:6Ewdq3tDic1mg5xRO4milcWCfMVQhI4NkqWWvqejpuA=
golang.org/x/crypto v0.11.0/go.mod h1:xgJhtzW8F9jGdVFWZESrid1U1bjeNy4zgy5cRr/CIio=
golang.org/x/crypto v0.12.0 h1:tFM/ta59kqch6LlvYnPa0yx5a83cL2nHflFhYKvv9Yk=
golang.org/x/crypto v0.12.0/go.mod h1:NF0Gs7EO5K4qLn+Ylc+fih8BSTeIjAP05siRnAh98yw=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
@@ -1084,8 +1088,8 @@ golang.org/x/net v0.3.0/go.mod h1:MBQ8lrhLObU/6UmLb4fmbmk5OcyYmqtbGd/9yIeKjEE=
golang.org/x/net v0.5.0/go.mod h1:DivGGAXEgPSlEBzxGzZI+ZLohi+xUj054jfeKui00ws=
golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.8.0/go.mod h1:QVkue5JL9kW//ek3r6jTKnTFis1tRmNAW2P1shuFdJc=
golang.org/x/net v0.10.0 h1:X2//UzNDwYmtCLn7To6G58Wr6f5ahEAQgKNzv9Y951M=
golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg=
golang.org/x/net v0.14.0 h1:BONx9s002vGdD9umnlX1Po8vOZmrgH34qlHcD1MfK14=
golang.org/x/net v0.14.0/go.mod h1:PpSgVXXLK0OxS0F31C1/tv6XNguvCrnXIDrFMspZIUI=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
@@ -1182,8 +1186,8 @@ golang.org/x/sys v0.4.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.4.1-0.20230131160137-e7d7f63158de/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.10.0 h1:SqMFp9UcQJZa+pmYuAKjd9xq1f0j5rLcDIk0mj4qAsA=
golang.org/x/sys v0.10.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.11.0 h1:eG7RXZHdqOJ1i+0lgLgCpSXAp6M3LYlAo6osgSi0xOM=
golang.org/x/sys v0.11.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.1.0/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
@@ -1192,8 +1196,8 @@ golang.org/x/term v0.3.0/go.mod h1:q750SLmJuPmVoN1blW3UFBPREJfb1KmY3vwxfr+nFDA=
golang.org/x/term v0.4.0/go.mod h1:9P2UbLfCdcvo3p/nzKvsmas4TnlujnuoV9hGgYzW1lQ=
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
golang.org/x/term v0.6.0/go.mod h1:m6U89DPEgQRMq3DNkDClhWw02AUbt2daBVO4cn4Hv9U=
golang.org/x/term v0.10.0 h1:3R7pNqamzBraeqj/Tj8qt1aQ2HpmlC+Cx/qL/7hn4/c=
golang.org/x/term v0.10.0/go.mod h1:lpqdcUyK/oCiQxvxVrppt5ggO2KCZ5QblwqPnfZ6d5o=
golang.org/x/term v0.11.0 h1:F9tnn/DA/Im8nCwm+fX+1/eBwi4qFjRT++MhtVC4ZX0=
golang.org/x/term v0.11.0/go.mod h1:zC9APTIj3jG3FdV/Ons+XE1riIZXG4aZ4GTHiPZJPIU=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
@@ -1209,8 +1213,8 @@ golang.org/x/text v0.6.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.8.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
golang.org/x/text v0.11.0 h1:LAntKIrcmeSKERyiOh0XMV39LXS8IE9UL2yP7+f5ij4=
golang.org/x/text v0.11.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
golang.org/x/text v0.12.0 h1:k+n5B8goJNdU7hSvEtMUz3d1Q6D/XW4COJSJR6fN0mc=
golang.org/x/text v0.12.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=

View File

@@ -64,7 +64,6 @@ const (
NotifyInitialState // if set, the first Notify message (sent immediately) will contain the current State + BrowseToURL
NotifyInitialPrefs // if set, the first Notify message (sent immediately) will contain the current Prefs
NotifyInitialNetMap // if set, the first Notify message (sent immediately) will contain the current NetMap
NotifyGUINetMap // if set, only use the Notify.GUINetMap; Notify.Netmap will always be nil. Also impacts NotifyInitialNetMap.
NotifyNoPrivateKeys // if set, private keys that would normally be sent in updates are zeroed out
)
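
Clients combine these mask bits when opening a watch on the IPN bus. As a rough sketch of how a GUI-style client might request the initial state without private keys — this assumes the public LocalClient.WatchIPNBus API, which is not part of this diff:

package main

import (
    "context"
    "log"

    "tailscale.com/client/tailscale"
    "tailscale.com/ipn"
)

func main() {
    var lc tailscale.LocalClient // talks to the local tailscaled
    w, err := lc.WatchIPNBus(context.Background(),
        ipn.NotifyInitialState|ipn.NotifyNoPrivateKeys)
    if err != nil {
        log.Fatal(err)
    }
    defer w.Close()
    for {
        n, err := w.Next() // blocks until the next ipn.Notify arrives
        if err != nil {
            log.Fatal(err)
        }
        if n.State != nil {
            log.Printf("state: %v", *n.State)
        }
    }
}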
@@ -82,14 +81,13 @@ type Notify struct {
// For State InUseOtherUser, ErrMessage is not critical and just contains the details.
ErrMessage *string
LoginFinished *empty.Message // non-nil when/if the login process succeeded
State *State // if non-nil, the new or current IPN state
Prefs *PrefsView // if non-nil && Valid, the new or current preferences
//NetMap *netmap.NetworkMap // if non-nil, the new or current netmap
GUINetMap *netmap.GUINetworkMap // if non-nil, the new or current netmap
Engine *EngineStatus // if non-nil, the new or current wireguard stats
BrowseToURL *string // if non-nil, UI should open a browser right now
BackendLogID *string // if non-nil, the public logtail ID used by backend
LoginFinished *empty.Message // non-nil when/if the login process succeeded
State *State // if non-nil, the new or current IPN state
Prefs *PrefsView // if non-nil && Valid, the new or current preferences
NetMap *netmap.NetworkMap // if non-nil, the new or current netmap
Engine *EngineStatus // if non-nil, the new or current wireguard stats
BrowseToURL *string // if non-nil, UI should open a browser right now
BackendLogID *string // if non-nil, the public logtail ID used by backend
// FilesWaiting if non-nil means that files are buffered in
// the Tailscale daemon and ready for local transfer to the
@@ -135,9 +133,9 @@ func (n Notify) String() string {
if n.Prefs != nil && n.Prefs.Valid() {
fmt.Fprintf(&sb, "%v ", n.Prefs.Pretty())
}
// if n.NetMap != nil {
// sb.WriteString("NetMap{...} ")
// }
if n.NetMap != nil {
sb.WriteString("NetMap{...} ")
}
if n.Engine != nil {
fmt.Fprintf(&sb, "wg=%v ", *n.Engine)
}

View File

@@ -6,6 +6,7 @@
package ipn
import (
"maps"
"net/netip"
"tailscale.com/tailcfg"
@@ -73,17 +74,13 @@ func (src *ServeConfig) Clone() *ServeConfig {
dst.Web[k] = v.Clone()
}
}
if dst.AllowFunnel != nil {
dst.AllowFunnel = map[HostPort]bool{}
for k, v := range src.AllowFunnel {
dst.AllowFunnel[k] = v
}
}
dst.AllowFunnel = maps.Clone(src.AllowFunnel)
return dst
}
// A compilation failure here means this code must be regenerated, with the command at the top of this file.
var _ServeConfigCloneNeedsRegeneration = ServeConfig(struct {
InMemory bool
TCP map[uint16]*TCPPortHandler
Web map[HostPort]*WebServerConfig
AllowFunnel map[HostPort]bool
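
The maps.Clone call in Clone above is behavior-preserving: the stdlib maps package (new in Go 1.21) returns nil for a nil input, so a ServeConfig with no AllowFunnel map still clones to nil rather than an empty map. A standalone illustration:

package main

import (
    "fmt"
    "maps"
)

func main() {
    var src map[string]bool
    fmt.Println(maps.Clone(src) == nil) // true: nil in, nil out

    src = map[string]bool{"example.ts.net:443": true}
    dst := maps.Clone(src) // shallow copy; the two maps are independent
    dst["example.ts.net:8443"] = true
    fmt.Println(len(src), len(dst)) // 1 2
}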

View File

@@ -79,8 +79,8 @@ func (v PrefsView) Hostname() string { return v.ж.Hostname }
func (v PrefsView) NotepadURLs() bool { return v.ж.NotepadURLs }
func (v PrefsView) ForceDaemon() bool { return v.ж.ForceDaemon }
func (v PrefsView) Egg() bool { return v.ж.Egg }
func (v PrefsView) AdvertiseRoutes() views.IPPrefixSlice {
return views.IPPrefixSliceOf(v.ж.AdvertiseRoutes)
func (v PrefsView) AdvertiseRoutes() views.Slice[netip.Prefix] {
return views.SliceOf(v.ж.AdvertiseRoutes)
}
func (v PrefsView) NoSNAT() bool { return v.ж.NoSNAT }
func (v PrefsView) NetfilterMode() preftype.NetfilterMode { return v.ж.NetfilterMode }
@@ -159,6 +159,8 @@ func (v *ServeConfigView) UnmarshalJSON(b []byte) error {
return nil
}
func (v ServeConfigView) InMemory() bool { return v.ж.InMemory }
func (v ServeConfigView) TCP() views.MapFn[uint16, *TCPPortHandler, TCPPortHandlerView] {
return views.MapFnOf(v.ж.TCP, func(t *TCPPortHandler) TCPPortHandlerView {
return t.View()
@@ -177,6 +179,7 @@ func (v ServeConfigView) AllowFunnel() views.Map[HostPort, bool] {
// A compilation failure here means this code must be regenerated, with the command at the top of this file.
var _ServeConfigViewNeedsRegeneration = ServeConfig(struct {
InMemory bool
TCP map[uint16]*TCPPortHandler
Web map[HostPort]*WebServerConfig
AllowFunnel map[HostPort]bool

View File

@@ -25,6 +25,8 @@ import (
"tailscale.com/version/distro"
)
var c2nLogHeap func(http.ResponseWriter, *http.Request) // non-nil on most platforms (c2n_pprof.go)
func (b *LocalBackend) handleC2N(w http.ResponseWriter, r *http.Request) {
writeJSON := func(v any) {
w.Header().Set("Content-Type", "application/json")
@@ -70,6 +72,13 @@ func (b *LocalBackend) handleC2N(w http.ResponseWriter, r *http.Request) {
res.Error = err.Error()
}
writeJSON(res)
case "/debug/logheap":
if c2nLogHeap != nil {
c2nLogHeap(w, r)
} else {
http.Error(w, "not implemented", http.StatusNotImplemented)
return
}
case "/ssh/usernames":
var req tailcfg.C2NSSHUsernamesRequest
if r.Method == "POST" {

17 ipn/ipnlocal/c2n_pprof.go Normal file
View File

@@ -0,0 +1,17 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build !js && !wasm
package ipnlocal
import (
"net/http"
"runtime/pprof"
)
func init() {
c2nLogHeap = func(w http.ResponseWriter, r *http.Request) {
pprof.WriteHeapProfile(w)
}
}
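
The pattern in this new file — a package-level function variable that stays nil on build-tag-excluded platforms and is assigned in init() everywhere else — keeps runtime/pprof out of the js/wasm build entirely. The same technique in a generic, hypothetical form (names are illustrative, not from this diff):

// feature.go — compiled on every platform
package demo

var heapDump func() []byte // nil unless a platform-specific file assigns it

func DumpHeap() ([]byte, bool) {
    if heapDump == nil {
        return nil, false // feature unavailable on this platform
    }
    return heapDump(), true
}

// feature_pprof.go — compiled only where pprof is wanted
//go:build !js && !wasm

package demo

import (
    "bytes"
    "runtime/pprof"
)

func init() {
    heapDump = func() []byte {
        var buf bytes.Buffer
        pprof.WriteHeapProfile(&buf) // error ignored for brevity
        return buf.Bytes()
    }
}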

View File

@@ -27,12 +27,12 @@ import (
"os"
"path/filepath"
"runtime"
"slices"
"strings"
"sync"
"time"
"github.com/tailscale/golang-x-crypto/acme"
"golang.org/x/exp/slices"
"tailscale.com/atomicfile"
"tailscale.com/envknob"
"tailscale.com/hostinfo"

View File

@@ -18,7 +18,6 @@ import (
"time"
"github.com/google/go-cmp/cmp"
"golang.org/x/exp/maps"
"tailscale.com/ipn/store/mem"
)
@@ -112,7 +111,7 @@ func TestShouldStartDomainRenewal(t *testing.T) {
reset := func() {
renewMu.Lock()
defer renewMu.Unlock()
maps.Clear(renewCertAt)
clear(renewCertAt)
}
mustMakePair := func(template *x509.Certificate) *TLSCertKeyPair {
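
The clear builtin used in reset above is new in Go 1.21 and replaces golang.org/x/exp/maps.Clear: applied to a map, it deletes every entry in place, so other references to the same map observe it emptied. A minimal example:

package main

import "fmt"

func main() {
    m := map[string]int{"a": 1, "b": 2}
    alias := m // same underlying map
    clear(m)   // deletes all entries in place
    fmt.Println(len(alias)) // 0
}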

View File

@@ -38,6 +38,14 @@ func ips(ss ...string) (ips []netip.Addr) {
return
}
func nodeViews(v []*tailcfg.Node) []tailcfg.NodeView {
nv := make([]tailcfg.NodeView, len(v))
for i, n := range v {
nv[i] = n.View()
}
return nv
}
func TestDNSConfigForNetmap(t *testing.T) {
tests := []struct {
name string
@@ -62,7 +70,7 @@ func TestDNSConfigForNetmap(t *testing.T) {
nm: &netmap.NetworkMap{
Name: "myname.net",
Addresses: ipps("100.101.101.101"),
Peers: []*tailcfg.Node{
Peers: nodeViews([]*tailcfg.Node{
{
Name: "peera.net",
Addresses: ipps("100.102.0.1", "100.102.0.2", "fe75::1001", "fe75::1002"),
@@ -75,7 +83,7 @@ func TestDNSConfigForNetmap(t *testing.T) {
Name: "v6-only.net",
Addresses: ipps("fe75::3"), // no IPv4, so we don't ignore IPv6
},
},
}),
},
prefs: &ipn.Prefs{},
want: &dns.Config{
@@ -96,7 +104,7 @@ func TestDNSConfigForNetmap(t *testing.T) {
nm: &netmap.NetworkMap{
Name: "myname.net",
Addresses: ipps("fe75::1"),
Peers: []*tailcfg.Node{
Peers: nodeViews([]*tailcfg.Node{
{
Name: "peera.net",
Addresses: ipps("100.102.0.1", "100.102.0.2", "fe75::1001"),
@@ -109,7 +117,7 @@ func TestDNSConfigForNetmap(t *testing.T) {
Name: "v6-only.net",
Addresses: ipps("fe75::3"), // no IPv4, so we don't ignore IPv6
},
},
}),
},
prefs: &ipn.Prefs{},
want: &dns.Config{

View File

@@ -87,24 +87,26 @@ func (em *expiryManager) flagExpiredPeers(netmap *netmap.NetworkMap, localNow ti
return
}
for _, peer := range netmap.Peers {
for i, peer := range netmap.Peers {
// Nodes that don't expire have KeyExpiry set to the zero time;
// skip those and peers that are already marked as expired
// (e.g. from control).
if peer.KeyExpiry.IsZero() || peer.KeyExpiry.After(controlNow) {
delete(em.previouslyExpired, peer.StableID)
if peer.KeyExpiry().IsZero() || peer.KeyExpiry().After(controlNow) {
delete(em.previouslyExpired, peer.StableID())
continue
} else if peer.Expired {
} else if peer.Expired() {
continue
}
if !em.previouslyExpired[peer.StableID] {
em.logf("[v1] netmap: flagExpiredPeers: clearing expired peer %v", peer.StableID)
em.previouslyExpired[peer.StableID] = true
if !em.previouslyExpired[peer.StableID()] {
em.logf("[v1] netmap: flagExpiredPeers: clearing expired peer %v", peer.StableID())
em.previouslyExpired[peer.StableID()] = true
}
mut := peer.AsStruct()
// Actually mark the node as expired
peer.Expired = true
mut.Expired = true
// Control clears the Endpoints and DERP fields of expired
// nodes; do so here as well. The Expired bool is the correct
@@ -113,12 +115,14 @@ func (em *expiryManager) flagExpiredPeers(netmap *netmap.NetworkMap, localNow ti
// NOTE: this is insufficient to actually break connectivity,
// since we discover endpoints via DERP, and due to DERP return
// path optimization.
peer.Endpoints = nil
peer.DERP = ""
mut.Endpoints = nil
mut.DERP = ""
// Defense-in-depth: break the node's public key as well, in
// case something tries to communicate.
peer.Key = key.NodePublicWithBadOldPrefix(peer.Key)
mut.Key = key.NodePublicWithBadOldPrefix(peer.Key())
netmap.Peers[i] = mut.View()
}
}
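
Because Peers is now a slice of immutable tailcfg.NodeView values, flagExpiredPeers edits a node by round-tripping through the underlying struct: AsStruct copies, the copy is mutated, and View publishes it back into the slice. The pattern in isolation (a sketch; assumes the tailcfg and time imports):

func markExpired(peers []tailcfg.NodeView, now time.Time) {
    for i, p := range peers {
        if p.KeyExpiry().IsZero() || p.KeyExpiry().After(now) || p.Expired() {
            continue
        }
        mut := p.AsStruct()   // deep copy of the underlying *tailcfg.Node
        mut.Expired = true    // mutate the private copy, never the shared view
        peers[i] = mut.View() // replace the slice entry with a fresh view
    }
}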
@@ -144,13 +148,13 @@ func (em *expiryManager) nextPeerExpiry(nm *netmap.NetworkMap, localNow time.Tim
var nextExpiry time.Time // zero if none
for _, peer := range nm.Peers {
if peer.KeyExpiry.IsZero() {
if peer.KeyExpiry().IsZero() {
continue // tagged node
} else if peer.Expired {
} else if peer.Expired() {
// Peer already expired; Expired is set by the
// flagExpiredPeers function, above.
continue
} else if peer.KeyExpiry.Before(controlNow) {
} else if peer.KeyExpiry().Before(controlNow) {
// This peer already expired, and peer.Expired
// isn't set for some reason. Skip this node.
continue
@@ -160,14 +164,14 @@ func (em *expiryManager) nextPeerExpiry(nm *netmap.NetworkMap, localNow time.Tim
// an expiry; otherwise, only update if this node's expiry is
// sooner than the currently-stored one (since we want the
// soonest-occurring expiry time).
if nextExpiry.IsZero() || peer.KeyExpiry.Before(nextExpiry) {
nextExpiry = peer.KeyExpiry
if nextExpiry.IsZero() || peer.KeyExpiry().Before(nextExpiry) {
nextExpiry = peer.KeyExpiry()
}
}
// Ensure that we also fire this timer if our own node key expires.
if nm.SelfNode != nil {
selfExpiry := nm.SelfNode.KeyExpiry
if nm.SelfNode.Valid() {
selfExpiry := nm.SelfNode.KeyExpiry()
if selfExpiry.IsZero() {
// No expiry for self node

View File

@@ -44,38 +44,38 @@ func TestFlagExpiredPeers(t *testing.T) {
name string
controlTime *time.Time
netmap *netmap.NetworkMap
want []*tailcfg.Node
want []tailcfg.NodeView
}{
{
name: "no_expiry",
controlTime: &now,
netmap: &netmap.NetworkMap{
Peers: []*tailcfg.Node{
Peers: nodeViews([]*tailcfg.Node{
n(1, "foo", timeInFuture),
n(2, "bar", timeInFuture),
},
}),
},
want: []*tailcfg.Node{
want: nodeViews([]*tailcfg.Node{
n(1, "foo", timeInFuture),
n(2, "bar", timeInFuture),
},
}),
},
{
name: "expiry",
controlTime: &now,
netmap: &netmap.NetworkMap{
Peers: []*tailcfg.Node{
Peers: nodeViews([]*tailcfg.Node{
n(1, "foo", timeInFuture),
n(2, "bar", timeInPast),
},
}),
},
want: []*tailcfg.Node{
want: nodeViews([]*tailcfg.Node{
n(1, "foo", timeInFuture),
n(2, "bar", timeInPast, func(n *tailcfg.Node) {
n.Expired = true
n.Key = expiredKey
}),
},
}),
},
{
name: "bad_ControlTime",
@@ -83,29 +83,29 @@ func TestFlagExpiredPeers(t *testing.T) {
controlTime: &timeBeforeEpoch,
netmap: &netmap.NetworkMap{
Peers: []*tailcfg.Node{
Peers: nodeViews([]*tailcfg.Node{
n(1, "foo", timeInFuture),
n(2, "bar", timeBeforeEpoch.Add(-1*time.Hour)), // before ControlTime
},
}),
},
want: []*tailcfg.Node{
want: nodeViews([]*tailcfg.Node{
n(1, "foo", timeInFuture),
n(2, "bar", timeBeforeEpoch.Add(-1*time.Hour)), // should have expired, but ControlTime is before epoch
},
}),
},
{
name: "tagged_node",
controlTime: &now,
netmap: &netmap.NetworkMap{
Peers: []*tailcfg.Node{
Peers: nodeViews([]*tailcfg.Node{
n(1, "foo", timeInFuture),
n(2, "bar", time.Time{}), // tagged node; zero expiry
},
}),
},
want: []*tailcfg.Node{
want: nodeViews([]*tailcfg.Node{
n(1, "foo", timeInFuture),
n(2, "bar", time.Time{}), // not expired
},
}),
},
}
for _, tt := range tests {
@@ -147,92 +147,92 @@ func TestNextPeerExpiry(t *testing.T) {
{
name: "no_expiry",
netmap: &netmap.NetworkMap{
Peers: []*tailcfg.Node{
Peers: nodeViews([]*tailcfg.Node{
n(1, "foo", noExpiry),
n(2, "bar", noExpiry),
},
SelfNode: n(3, "self", noExpiry),
}),
SelfNode: n(3, "self", noExpiry).View(),
},
want: noExpiry,
},
{
name: "future_expiry_from_peer",
netmap: &netmap.NetworkMap{
Peers: []*tailcfg.Node{
Peers: nodeViews([]*tailcfg.Node{
n(1, "foo", noExpiry),
n(2, "bar", timeInFuture),
},
SelfNode: n(3, "self", noExpiry),
}),
SelfNode: n(3, "self", noExpiry).View(),
},
want: timeInFuture,
},
{
name: "future_expiry_from_self",
netmap: &netmap.NetworkMap{
Peers: []*tailcfg.Node{
Peers: nodeViews([]*tailcfg.Node{
n(1, "foo", noExpiry),
n(2, "bar", noExpiry),
},
SelfNode: n(3, "self", timeInFuture),
}),
SelfNode: n(3, "self", timeInFuture).View(),
},
want: timeInFuture,
},
{
name: "future_expiry_from_multiple_peers",
netmap: &netmap.NetworkMap{
Peers: []*tailcfg.Node{
Peers: nodeViews([]*tailcfg.Node{
n(1, "foo", timeInFuture),
n(2, "bar", timeInMoreFuture),
},
SelfNode: n(3, "self", noExpiry),
}),
SelfNode: n(3, "self", noExpiry).View(),
},
want: timeInFuture,
},
{
name: "future_expiry_from_peer_and_self",
netmap: &netmap.NetworkMap{
Peers: []*tailcfg.Node{
Peers: nodeViews([]*tailcfg.Node{
n(1, "foo", timeInMoreFuture),
},
SelfNode: n(2, "self", timeInFuture),
}),
SelfNode: n(2, "self", timeInFuture).View(),
},
want: timeInFuture,
},
{
name: "only_self",
netmap: &netmap.NetworkMap{
Peers: []*tailcfg.Node{},
SelfNode: n(1, "self", timeInFuture),
Peers: nodeViews([]*tailcfg.Node{}),
SelfNode: n(1, "self", timeInFuture).View(),
},
want: timeInFuture,
},
{
name: "peer_already_expired",
netmap: &netmap.NetworkMap{
Peers: []*tailcfg.Node{
Peers: nodeViews([]*tailcfg.Node{
n(1, "foo", timeInPast),
},
SelfNode: n(2, "self", timeInFuture),
}),
SelfNode: n(2, "self", timeInFuture).View(),
},
want: timeInFuture,
},
{
name: "self_already_expired",
netmap: &netmap.NetworkMap{
Peers: []*tailcfg.Node{
Peers: nodeViews([]*tailcfg.Node{
n(1, "foo", timeInFuture),
},
SelfNode: n(2, "self", timeInPast),
}),
SelfNode: n(2, "self", timeInPast).View(),
},
want: timeInFuture,
},
{
name: "all_nodes_already_expired",
netmap: &netmap.NetworkMap{
Peers: []*tailcfg.Node{
Peers: nodeViews([]*tailcfg.Node{
n(1, "foo", timeInPast),
},
SelfNode: n(2, "self", timeInPast),
}),
SelfNode: n(2, "self", timeInPast).View(),
},
want: noExpiry,
},
@@ -263,9 +263,9 @@ func TestNextPeerExpiry(t *testing.T) {
// If we don't adjust for the local time, this would return a
// time in the past.
nm := &netmap.NetworkMap{
Peers: []*tailcfg.Node{
Peers: nodeViews([]*tailcfg.Node{
n(1, "foo", timeInPast),
},
}),
}
got := em.nextPeerExpiry(nm, now)
want := now.Add(30 * time.Second)
@@ -275,24 +275,24 @@ func TestNextPeerExpiry(t *testing.T) {
})
}
func formatNodes(nodes []*tailcfg.Node) string {
func formatNodes(nodes []tailcfg.NodeView) string {
var sb strings.Builder
for i, n := range nodes {
if i > 0 {
sb.WriteString(", ")
}
fmt.Fprintf(&sb, "(%d, %q", n.ID, n.Name)
fmt.Fprintf(&sb, "(%d, %q", n.ID(), n.Name())
if n.Online != nil {
fmt.Fprintf(&sb, ", online=%v", *n.Online)
if n.Online() != nil {
fmt.Fprintf(&sb, ", online=%v", *n.Online())
}
if n.LastSeen != nil {
fmt.Fprintf(&sb, ", lastSeen=%v", n.LastSeen.Unix())
if n.LastSeen() != nil {
fmt.Fprintf(&sb, ", lastSeen=%v", n.LastSeen().Unix())
}
if n.Key != (key.NodePublic{}) {
fmt.Fprintf(&sb, ", key=%v", n.Key.String())
if n.Key() != (key.NodePublic{}) {
fmt.Fprintf(&sb, ", key=%v", n.Key().String())
}
if n.Expired {
if n.Expired() {
fmt.Fprintf(&sb, ", expired=true")
}
sb.WriteString(")")

View File

@@ -20,6 +20,7 @@ import (
"os/user"
"path/filepath"
"runtime"
"slices"
"sort"
"strconv"
"strings"
@@ -29,7 +30,6 @@ import (
"go4.org/mem"
"go4.org/netipx"
"golang.org/x/exp/slices"
"gvisor.dev/gvisor/pkg/tcpip"
"tailscale.com/client/tailscale/apitype"
"tailscale.com/control/controlclient"
@@ -50,6 +50,7 @@ import (
"tailscale.com/net/dnscache"
"tailscale.com/net/dnsfallback"
"tailscale.com/net/interfaces"
"tailscale.com/net/netmon"
"tailscale.com/net/netns"
"tailscale.com/net/netutil"
"tailscale.com/net/tsaddr"
@@ -204,7 +205,7 @@ type LocalBackend struct {
// netMap is not mutated in-place once set.
netMap *netmap.NetworkMap
nmExpiryTimer tstime.TimerController // for updating netMap on node expiry; can be nil
nodeByAddr map[netip.Addr]*tailcfg.Node
nodeByAddr map[netip.Addr]tailcfg.NodeView
activeLogin string // last logged LoginName from netMap
engineStatus ipn.EngineStatus
endpoints []tailcfg.Endpoint
@@ -241,9 +242,13 @@ type LocalBackend struct {
// ServeConfig fields. (also guarded by mu)
lastServeConfJSON mem.RO // last JSON that was parsed into serveConfig
serveConfig ipn.ServeConfigView // or !Valid if none
memServeConfig ipn.ServeConfigView // or !Valid if none
serveListeners map[netip.AddrPort]*serveListener // addrPort => serveListener
serveProxyHandlers sync.Map // string (HTTPHandler.Proxy) => *httputil.ReverseProxy
// serveStreamers is a map for those running Funnel in the foreground
// and streaming incoming requests.
serveStreamers map[uint16]map[uint32]func(ipn.FunnelRequestLog) // serve port => map of stream loggers (key is UUID)
// statusLock must be held before calling statusChanged.Wait() or
// statusChanged.Broadcast().
@@ -339,7 +344,7 @@ func NewLocalBackend(logf logger.Logf, logID logid.PublicID, sys *tsd.System, lo
b.prevIfState = netMon.InterfaceState()
// Call our linkChange code once with the current state, and
// then also whenever it changes:
b.linkChange(false, netMon.InterfaceState())
b.linkChange(&netmon.ChangeDelta{New: netMon.InterfaceState()})
b.unregisterNetMon = netMon.RegisterChangeCallback(b.linkChange)
b.unregisterHealthWatch = health.RegisterWatcher(b.onHealthChange)
@@ -505,11 +510,11 @@ func (b *LocalBackend) pauseOrResumeControlClientLocked() {
}
// linkChange is our network monitor callback, called whenever the network changes.
// major is whether ifst is different than earlier.
func (b *LocalBackend) linkChange(major bool, ifst *interfaces.State) {
func (b *LocalBackend) linkChange(delta *netmon.ChangeDelta) {
b.mu.Lock()
defer b.mu.Unlock()
ifst := delta.New
hadPAC := b.prevIfState.HasPAC()
b.prevIfState = ifst
b.pauseOrResumeControlClientLocked()
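
With ChangeFunc now taking a single *netmon.ChangeDelta instead of a (bool, *interfaces.State) pair, new fields can be added later without another signature change. A consumer registers much as LocalBackend does above; roughly (a sketch against the netmon API in this tree):

unregister := netMon.RegisterChangeCallback(func(delta *netmon.ChangeDelta) {
    // delta.New carries the current interface state, which earlier
    // callbacks received as a bare *interfaces.State argument.
    log.Printf("link change; new interface state: %v", delta.New)
})
defer unregister()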
@@ -647,6 +652,7 @@ func (b *LocalBackend) UpdateStatus(sb *ipnstate.StatusBuilder) {
func (b *LocalBackend) updateStatus(sb *ipnstate.StatusBuilder, extraLocked func(*ipnstate.StatusBuilder)) {
b.mu.Lock()
defer b.mu.Unlock()
sb.MutateStatus(func(s *ipnstate.Status) {
s.Version = version.Long()
s.TUN = !b.sys.IsNetstack()
@@ -684,32 +690,50 @@ func (b *LocalBackend) updateStatus(sb *ipnstate.StatusBuilder, extraLocked func
if !prefs.ExitNodeID().IsZero() {
if exitPeer, ok := b.netMap.PeerWithStableID(prefs.ExitNodeID()); ok {
var online = false
if exitPeer.Online != nil {
online = *exitPeer.Online
if v := exitPeer.Online(); v != nil {
online = *v
}
s.ExitNodeStatus = &ipnstate.ExitNodeStatus{
ID: prefs.ExitNodeID(),
Online: online,
TailscaleIPs: exitPeer.Addresses,
TailscaleIPs: exitPeer.Addresses().AsSlice(),
}
}
}
}
}
})
var tailscaleIPs []netip.Addr
if b.netMap != nil {
for _, addr := range b.netMap.Addresses {
if addr.IsSingleIP() {
sb.AddTailscaleIP(addr.Addr())
tailscaleIPs = append(tailscaleIPs, addr.Addr())
}
}
}
sb.MutateSelfStatus(func(ss *ipnstate.PeerStatus) {
ss.OS = version.OS()
ss.Online = health.GetInPollNetMap()
if b.netMap != nil {
ss.InNetworkMap = true
ss.HostName = b.netMap.Hostinfo.Hostname
if hi := b.netMap.SelfNode.Hostinfo(); hi.Valid() {
ss.HostName = hi.Hostname()
}
ss.DNSName = b.netMap.Name
ss.UserID = b.netMap.User
if sn := b.netMap.SelfNode; sn != nil {
ss.UserID = b.netMap.User()
if sn := b.netMap.SelfNode; sn.Valid() {
peerStatusFromNode(ss, sn)
if c := sn.Capabilities; len(c) > 0 {
ss.Capabilities = append([]string(nil), c...)
if c := sn.Capabilities(); c.Len() > 0 {
ss.Capabilities = c.AsSlice()
}
}
for _, addr := range tailscaleIPs {
ss.TailscaleIPs = append(ss.TailscaleIPs, addr)
}
} else {
ss.HostName, _ = os.Hostname()
}
@@ -735,28 +759,31 @@ func (b *LocalBackend) populatePeerStatusLocked(sb *ipnstate.StatusBuilder) {
exitNodeID := b.pm.CurrentPrefs().ExitNodeID()
for _, p := range b.netMap.Peers {
var lastSeen time.Time
if p.LastSeen != nil {
lastSeen = *p.LastSeen
if p.LastSeen() != nil {
lastSeen = *p.LastSeen()
}
var tailscaleIPs = make([]netip.Addr, 0, len(p.Addresses))
for _, addr := range p.Addresses {
var tailscaleIPs = make([]netip.Addr, 0, p.Addresses().Len())
for i := range p.Addresses().LenIter() {
addr := p.Addresses().At(i)
if addr.IsSingleIP() && tsaddr.IsTailscaleIP(addr.Addr()) {
tailscaleIPs = append(tailscaleIPs, addr.Addr())
}
}
online := p.Online()
ps := &ipnstate.PeerStatus{
InNetworkMap: true,
UserID: p.User,
TailscaleIPs: tailscaleIPs,
HostName: p.Hostinfo.Hostname(),
DNSName: p.Name,
OS: p.Hostinfo.OS(),
LastSeen: lastSeen,
Online: p.Online != nil && *p.Online,
ShareeNode: p.Hostinfo.ShareeNode(),
ExitNode: p.StableID != "" && p.StableID == exitNodeID,
SSH_HostKeys: p.Hostinfo.SSH_HostKeys().AsSlice(),
Location: p.Hostinfo.Location(),
InNetworkMap: true,
UserID: p.User(),
AltSharerUserID: p.Sharer(),
TailscaleIPs: tailscaleIPs,
HostName: p.Hostinfo().Hostname(),
DNSName: p.Name(),
OS: p.Hostinfo().OS(),
LastSeen: lastSeen,
Online: online != nil && *online,
ShareeNode: p.Hostinfo().ShareeNode(),
ExitNode: p.StableID() != "" && p.StableID() == exitNodeID,
SSH_HostKeys: p.Hostinfo().SSH_HostKeys().AsSlice(),
Location: p.Hostinfo().Location(),
}
peerStatusFromNode(ps, p)
@@ -767,29 +794,30 @@ func (b *LocalBackend) populatePeerStatusLocked(sb *ipnstate.StatusBuilder) {
if u := peerAPIURL(nodeIP(p, netip.Addr.Is6), p6); u != "" {
ps.PeerAPIURL = append(ps.PeerAPIURL, u)
}
sb.AddPeer(p.Key, ps)
sb.AddPeer(p.Key(), ps)
}
}
// peerStatusFromNode copies fields that exist in the Node struct for
// current node and peers into the provided PeerStatus.
func peerStatusFromNode(ps *ipnstate.PeerStatus, n *tailcfg.Node) {
ps.ID = n.StableID
ps.Created = n.Created
ps.ExitNodeOption = tsaddr.ContainsExitRoutes(n.AllowedIPs)
if n.Tags != nil {
v := views.SliceOf(n.Tags)
func peerStatusFromNode(ps *ipnstate.PeerStatus, n tailcfg.NodeView) {
ps.PublicKey = n.Key()
ps.ID = n.StableID()
ps.Created = n.Created()
ps.ExitNodeOption = tsaddr.ContainsExitRoutes(n.AllowedIPs())
if n.Tags().Len() != 0 {
v := n.Tags()
ps.Tags = &v
}
if n.PrimaryRoutes != nil {
v := views.IPPrefixSliceOf(n.PrimaryRoutes)
if n.PrimaryRoutes().Len() != 0 {
v := n.PrimaryRoutes()
ps.PrimaryRoutes = &v
}
if n.Expired {
if n.Expired() {
ps.Expired = true
}
if t := n.KeyExpiry; !t.IsZero() {
if t := n.KeyExpiry(); !t.IsZero() {
t = t.Round(time.Second)
ps.KeyExpiry = &t
}
@@ -798,7 +826,8 @@ func peerStatusFromNode(ps *ipnstate.PeerStatus, n *tailcfg.Node) {
// WhoIs reports the node and user who owns the node with the given IP:port.
// If the IP address is a Tailscale IP, the provided port may be 0.
// If ok == true, n and u are valid.
func (b *LocalBackend) WhoIs(ipp netip.AddrPort) (n *tailcfg.Node, u tailcfg.UserProfile, ok bool) {
func (b *LocalBackend) WhoIs(ipp netip.AddrPort) (n tailcfg.NodeView, u tailcfg.UserProfile, ok bool) {
var zero tailcfg.NodeView
b.mu.Lock()
defer b.mu.Unlock()
n, ok = b.nodeByAddr[ipp.Addr()]
@@ -808,16 +837,16 @@ func (b *LocalBackend) WhoIs(ipp netip.AddrPort) (n *tailcfg.Node, u tailcfg.Use
ip, ok = b.e.WhoIsIPPort(ipp)
}
if !ok {
return nil, u, false
return zero, u, false
}
n, ok = b.nodeByAddr[ip]
if !ok {
return nil, u, false
return zero, u, false
}
}
u, ok = b.netMap.UserProfiles[n.User]
u, ok = b.netMap.UserProfiles[n.User()]
if !ok {
return nil, u, false
return zero, u, false
}
return n, u, true
}
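
Callers of WhoIs now receive a zero tailcfg.NodeView on failure rather than a nil pointer, so the ok result (or NodeView.Valid) is the thing to check, e.g. (sketch, with a made-up address):

n, u, ok := b.WhoIs(netip.MustParseAddrPort("100.64.0.1:0"))
if !ok {
    return // unknown peer; n is the zero NodeView, n.Valid() == false
}
log.Printf("%s is owned by %s", n.Name(), u.LoginName)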
@@ -1114,13 +1143,14 @@ func setExitNodeID(prefs *ipn.Prefs, nm *netmap.NetworkMap) (prefsChanged bool)
}
for _, peer := range nm.Peers {
for _, addr := range peer.Addresses {
for i := range peer.Addresses().LenIter() {
addr := peer.Addresses().At(i)
if !addr.IsSingleIP() || addr.Addr() != prefs.ExitNodeIP {
continue
}
// Found the node being referenced, upgrade prefs to
// reference it directly for next time.
prefs.ExitNodeID = peer.StableID
prefs.ExitNodeID = peer.StableID()
prefs.ExitNodeIP = netip.Addr{}
return true
}
@@ -1597,16 +1627,16 @@ func (b *LocalBackend) updateFilterLocked(netMap *netmap.NetworkMap, prefs ipn.P
//
// If this reports true, the packet filter is invalid (the server is either broken
// or malicious) and should be ignored for safety.
func packetFilterPermitsUnlockedNodes(peers []*tailcfg.Node, packetFilter []filter.Match) bool {
func packetFilterPermitsUnlockedNodes(peers []tailcfg.NodeView, packetFilter []filter.Match) bool {
var b netipx.IPSetBuilder
var numUnlocked int
for _, p := range peers {
if !p.UnsignedPeerAPIOnly {
if !p.UnsignedPeerAPIOnly() {
continue
}
numUnlocked++
for _, a := range p.AllowedIPs { // not only addresses!
b.AddPrefix(a)
for i := range p.AllowedIPs().LenIter() { // not only addresses!
b.AddPrefix(p.AllowedIPs().At(i))
}
}
if numUnlocked == 0 {
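
views.Slice values deliberately don't support range directly, so loops throughout this change iterate with LenIter and At, as the AllowedIPs loop above does. The shape in isolation (sketch):

func firstIPv4(addrs views.Slice[netip.Prefix]) (netip.Addr, bool) {
    for i := range addrs.LenIter() { // LenIter yields a length-n value to range over
        if a := addrs.At(i); a.Addr().Is4() {
            return a.Addr(), true
        }
    }
    return netip.Addr{}, false
}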
@@ -1764,11 +1794,11 @@ func shrinkDefaultRoute(route netip.Prefix, localInterfaceRoutes *netipx.IPSet,
// dnsCIDRsEqual determines whether two CIDR lists are equal
// for DNS map construction purposes (that is, only the first entry counts).
func dnsCIDRsEqual(newAddr, oldAddr []netip.Prefix) bool {
if len(newAddr) != len(oldAddr) {
func dnsCIDRsEqual(newAddr, oldAddr views.Slice[netip.Prefix]) bool {
if newAddr.Len() != oldAddr.Len() {
return false
}
if len(newAddr) == 0 || newAddr[0] == oldAddr[0] {
if newAddr.Len() == 0 || newAddr.At(0) == oldAddr.At(0) {
return true
}
return false
@@ -1792,16 +1822,16 @@ func dnsMapsEqual(new, old *netmap.NetworkMap) bool {
if new.Name != old.Name {
return false
}
if !dnsCIDRsEqual(new.Addresses, old.Addresses) {
if !dnsCIDRsEqual(views.SliceOf(new.Addresses), views.SliceOf(old.Addresses)) {
return false
}
for i, newPeer := range new.Peers {
oldPeer := old.Peers[i]
if newPeer.Name != oldPeer.Name {
if newPeer.Name() != oldPeer.Name() {
return false
}
if !dnsCIDRsEqual(newPeer.Addresses, oldPeer.Addresses) {
if !dnsCIDRsEqual(newPeer.Addresses(), oldPeer.Addresses()) {
return false
}
}
@@ -2300,8 +2330,10 @@ func (b *LocalBackend) setAtomicValuesFromPrefsLocked(p ipn.PrefsView) {
b.setTCPPortsIntercepted(nil)
b.lastServeConfJSON = mem.B(nil)
b.serveConfig = ipn.ServeConfigView{}
b.memServeConfig = ipn.ServeConfigView{}
} else {
b.containsViaIPFuncAtomic.Store(tsaddr.NewContainsIPFunc(p.AdvertiseRoutes().Filter(tsaddr.IsViaPrefix)))
filtered := tsaddr.FilterPrefixesCopy(p.AdvertiseRoutes(), tsaddr.IsViaPrefix)
b.containsViaIPFuncAtomic.Store(tsaddr.NewContainsIPFunc(filtered))
b.setTCPPortsInterceptedFromNetmapAndPrefsLocked(p)
}
}
@@ -2417,8 +2449,8 @@ func (b *LocalBackend) Ping(ctx context.Context, ip netip.Addr, pingType tailcfg
if err != nil {
pr.Err = err.Error()
}
if node != nil {
pr.NodeName = node.Name
if node.Valid() {
pr.NodeName = node.Name()
}
return pr, nil
}
@@ -2437,36 +2469,37 @@ func (b *LocalBackend) Ping(ctx context.Context, ip netip.Addr, pingType tailcfg
}
}
func (b *LocalBackend) pingPeerAPI(ctx context.Context, ip netip.Addr) (peer *tailcfg.Node, peerBase string, err error) {
func (b *LocalBackend) pingPeerAPI(ctx context.Context, ip netip.Addr) (peer tailcfg.NodeView, peerBase string, err error) {
var zero tailcfg.NodeView
ctx, cancel := context.WithTimeout(ctx, 10*time.Second)
defer cancel()
nm := b.NetMap()
if nm == nil {
return nil, "", errors.New("no netmap")
return zero, "", errors.New("no netmap")
}
peer, ok := nm.PeerByTailscaleIP(ip)
if !ok {
return nil, "", fmt.Errorf("no peer found with Tailscale IP %v", ip)
return zero, "", fmt.Errorf("no peer found with Tailscale IP %v", ip)
}
if peer.Expired {
return nil, "", errors.New("peer's node key has expired")
if peer.Expired() {
return zero, "", errors.New("peer's node key has expired")
}
base := peerAPIBase(nm, peer)
if base == "" {
return nil, "", fmt.Errorf("no PeerAPI base found for peer %v (%v)", peer.ID, ip)
return zero, "", fmt.Errorf("no PeerAPI base found for peer %v (%v)", peer.ID(), ip)
}
outReq, err := http.NewRequestWithContext(ctx, "HEAD", base, nil)
if err != nil {
return nil, "", err
return zero, "", err
}
tr := b.Dialer().PeerAPITransport()
res, err := tr.RoundTrip(outReq)
if err != nil {
return nil, "", err
return zero, "", err
}
defer res.Body.Close() // but unnecessary on HEAD responses
if res.StatusCode != http.StatusOK {
return nil, "", fmt.Errorf("HTTP status %v", res.Status)
return zero, "", fmt.Errorf("HTTP status %v", res.Status)
}
return peer, base, nil
}
@@ -2655,7 +2688,7 @@ func (b *LocalBackend) checkExitNodePrefsLocked(p *ipn.Prefs) error {
}
func (b *LocalBackend) checkFunnelEnabledLocked(p *ipn.Prefs) error {
if p.ShieldsUp && b.serveConfig.IsFunnelOn() {
if p.ShieldsUp && (b.serveConfig.IsFunnelOn() || b.memServeConfig.IsFunnelOn()) {
return errors.New("Cannot enable shields-up when Funnel is enabled.")
}
return nil
@@ -2734,7 +2767,8 @@ func (b *LocalBackend) SetPrefs(newp *ipn.Prefs) {
// doesn't affect security or correctness. And we also don't expect people to
// modify their ServeConfig in raw mode.
func (b *LocalBackend) wantIngressLocked() bool {
return b.serveConfig.Valid() && b.serveConfig.AllowFunnel().Len() > 0
return b.serveConfig.Valid() && (b.serveConfig.AllowFunnel().Len() > 0) ||
b.memServeConfig.Valid() && (b.memServeConfig.AllowFunnel().Len() > 0)
}
// setPrefsLockedOnEntry requires b.mu be held to call it, but it
@@ -2774,7 +2808,7 @@ func (b *LocalBackend) setPrefsLockedOnEntry(caller string, newp *ipn.Prefs) ipn
}
}
if netMap != nil {
newProfile := netMap.UserProfiles[netMap.User]
newProfile := netMap.UserProfiles[netMap.User()]
if newLoginName := newProfile.LoginName; newLoginName != "" {
if !oldp.Persist().Valid() {
b.logf("active login: %s", newLoginName)
@@ -2987,7 +3021,7 @@ func (b *LocalBackend) authReconfig() {
prefs := b.pm.CurrentPrefs()
nm := b.netMap
hasPAC := b.prevIfState.HasPAC()
disableSubnetsIfPAC := nm != nil && nm.Debug != nil && nm.Debug.DisableSubnetsIfPAC.EqualBool(true)
disableSubnetsIfPAC := hasCapability(nm, tailcfg.NodeAttrDisableSubnetsIfPAC)
b.mu.Unlock()
if blocked {
@@ -3036,7 +3070,7 @@ func (b *LocalBackend) authReconfig() {
rcfg := b.routerConfig(cfg, prefs, oneCGNATRoute)
dcfg := dnsConfigForNetmap(nm, prefs, b.logf, version.OS())
err = b.e.Reconfig(cfg, rcfg, dcfg, nm.Debug)
err = b.e.Reconfig(cfg, rcfg, dcfg)
if err == wgengine.ErrNoChanges {
return
}
@@ -3052,12 +3086,11 @@ func (b *LocalBackend) authReconfig() {
// a runtime.GOOS.
func shouldUseOneCGNATRoute(nm *netmap.NetworkMap, logf logger.Logf, versionOS string) bool {
// Explicit enabling or disabling always take precedence.
if nm.Debug != nil {
if v, ok := nm.Debug.OneCGNATRoute.Get(); ok {
logf("[v1] shouldUseOneCGNATRoute: explicit=%v", v)
return v
}
if v, ok := controlclient.ControlOneCGNATSetting().Get(); ok {
logf("[v1] shouldUseOneCGNATRoute: explicit=%v", v)
return v
}
// Also prefer to do this on the Mac, so that we don't need to constantly
// update the network extension configuration (which is disruptive to
// Chrome, see https://github.com/tailscale/tailscale/issues/3102). Only
@@ -3098,17 +3131,24 @@ func dnsConfigForNetmap(nm *netmap.NetworkMap, prefs ipn.PrefsView, logf logger.
// isn't configured to make MagicDNS resolution truly
// magic. Details in
// https://github.com/tailscale/tailscale/issues/1886.
set := func(name string, addrs []netip.Prefix) {
if len(addrs) == 0 || name == "" {
set := func(name string, addrs views.Slice[netip.Prefix]) {
if addrs.Len() == 0 || name == "" {
return
}
fqdn, err := dnsname.ToFQDN(name)
if err != nil {
return // TODO: propagate error?
}
have4 := slices.ContainsFunc(addrs, tsaddr.PrefixIs4)
var have4 bool
for i := range addrs.LenIter() {
if addrs.At(i).Addr().Is4() {
have4 = true
break
}
}
var ips []netip.Addr
for _, addr := range addrs {
for i := range addrs.LenIter() {
addr := addrs.At(i)
if selfV6Only {
if addr.Addr().Is6() {
ips = append(ips, addr.Addr())
@@ -3130,9 +3170,9 @@ func dnsConfigForNetmap(nm *netmap.NetworkMap, prefs ipn.PrefsView, logf logger.
}
dcfg.Hosts[fqdn] = ips
}
set(nm.Name, nm.Addresses)
set(nm.Name, views.SliceOf(nm.Addresses))
for _, peer := range nm.Peers {
set(peer.Name, peer.Addresses)
set(peer.Name(), peer.Addresses())
}
for _, rec := range nm.DNS.ExtraRecords {
switch rec.Type {
@@ -3362,11 +3402,11 @@ func (b *LocalBackend) initPeerAPIListener() {
b.closePeerAPIListenersLocked()
selfNode := b.netMap.SelfNode
if len(b.netMap.Addresses) == 0 || selfNode == nil {
if len(b.netMap.Addresses) == 0 || !selfNode.Valid() {
return
}
fileRoot := b.fileRootLocked(selfNode.User)
fileRoot := b.fileRootLocked(selfNode.User())
if fileRoot == "" {
b.logf("peerapi starting without Taildrop directory configured")
}
@@ -3654,7 +3694,7 @@ func (b *LocalBackend) enterStateLockedOnEntry(newState ipn.State) {
b.blockEngineUpdates(true)
fallthrough
case ipn.Stopped:
err := b.e.Reconfig(&wgcfg.Config{}, &router.Config{}, &dns.Config{}, nil)
err := b.e.Reconfig(&wgcfg.Config{}, &router.Config{}, &dns.Config{})
if err != nil {
b.logf("Reconfig(down): %v", err)
}
@@ -3796,7 +3836,7 @@ func (b *LocalBackend) stateMachine() {
// a status update that predates the "I've shut down" update.
func (b *LocalBackend) stopEngineAndWait() {
b.logf("stopEngineAndWait...")
b.e.Reconfig(&wgcfg.Config{}, &router.Config{}, &dns.Config{}, nil)
b.e.Reconfig(&wgcfg.Config{}, &router.Config{}, &dns.Config{})
b.requestEngineStatusAndWait()
b.logf("stopEngineAndWait: done.")
}
@@ -3942,12 +3982,8 @@ func (b *LocalBackend) setNetInfo(ni *tailcfg.NetInfo) {
}
func hasCapability(nm *netmap.NetworkMap, cap string) bool {
if nm != nil && nm.SelfNode != nil {
for _, c := range nm.SelfNode.Capabilities {
if c == cap {
return true
}
}
if nm != nil && nm.SelfNode.Valid() {
return views.SliceContains(nm.SelfNode.Capabilities(), cap)
}
return false
}
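
views.SliceContains is the views analogue of the stdlib slices.Contains and replaces the hand-rolled loop here, e.g.:

caps := views.SliceOf([]string{tailcfg.CapabilityDebug})
has := views.SliceContains(caps, tailcfg.CapabilityDebug) // true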
@@ -3959,7 +3995,7 @@ func (b *LocalBackend) setNetMapLocked(nm *netmap.NetworkMap) {
b.dialer.SetNetMap(nm)
var login string
if nm != nil {
login = cmpx.Or(nm.UserProfiles[nm.User].LoginName, "<missing-profile>")
login = cmpx.Or(nm.UserProfiles[nm.User()].LoginName, "<missing-profile>")
}
b.netMap = nm
if login != b.activeLogin {
@@ -3995,20 +4031,20 @@ func (b *LocalBackend) setNetMapLocked(nm *netmap.NetworkMap) {
// Update the nodeByAddr index.
if b.nodeByAddr == nil {
b.nodeByAddr = map[netip.Addr]*tailcfg.Node{}
b.nodeByAddr = map[netip.Addr]tailcfg.NodeView{}
}
// First pass, mark everything unwanted.
for k := range b.nodeByAddr {
b.nodeByAddr[k] = nil
b.nodeByAddr[k] = tailcfg.NodeView{}
}
addNode := func(n *tailcfg.Node) {
for _, ipp := range n.Addresses {
if ipp.IsSingleIP() {
addNode := func(n tailcfg.NodeView) {
for i := range n.Addresses().LenIter() {
if ipp := n.Addresses().At(i); ipp.IsSingleIP() {
b.nodeByAddr[ipp.Addr()] = n
}
}
}
if nm.SelfNode != nil {
if nm.SelfNode.Valid() {
addNode(nm.SelfNode)
}
for _, p := range nm.Peers {
@@ -4016,7 +4052,7 @@ func (b *LocalBackend) setNetMapLocked(nm *netmap.NetworkMap) {
}
// Third pass, actually delete the unwanted items.
for k, v := range b.nodeByAddr {
if v == nil {
if !v.Valid() {
delete(b.nodeByAddr, k)
}
}
@@ -4035,11 +4071,12 @@ func (b *LocalBackend) setDebugLogsByCapabilityLocked(nm *netmap.NetworkMap) {
}
func (b *LocalBackend) reloadServeConfigLocked(prefs ipn.PrefsView) {
if b.netMap == nil || b.netMap.SelfNode == nil || !prefs.Valid() || b.pm.CurrentProfile().ID == "" {
if b.netMap == nil || !b.netMap.SelfNode.Valid() || !prefs.Valid() || b.pm.CurrentProfile().ID == "" {
// We're not logged in, so we don't have a profile.
// Don't try to load the serve config.
b.lastServeConfJSON = mem.B(nil)
b.serveConfig = ipn.ServeConfigView{}
// b.memServeConfig = ipn.ServeConfigView{} should we do this?
return
}
confKey := ipn.ServeConfigKey(b.pm.CurrentProfile().ID)
@@ -4049,6 +4086,7 @@ func (b *LocalBackend) reloadServeConfigLocked(prefs ipn.PrefsView) {
if err != nil {
b.lastServeConfJSON = mem.B(nil)
b.serveConfig = ipn.ServeConfigView{}
// b.memServeConfig = ipn.ServeConfigView{} should we do this?
return
}
if b.lastServeConfJSON.Equal(mem.B(confj)) {
@@ -4059,6 +4097,7 @@ func (b *LocalBackend) reloadServeConfigLocked(prefs ipn.PrefsView) {
if err := json.Unmarshal(confj, &conf); err != nil {
b.logf("invalid ServeConfig %q in StateStore: %v", confKey, err)
b.serveConfig = ipn.ServeConfigView{}
// b.memServeConfig = ipn.ServeConfigView{} should we do this?
return
}
b.serveConfig = conf.View()
@@ -4076,9 +4115,13 @@ func (b *LocalBackend) setTCPPortsInterceptedFromNetmapAndPrefsLocked(prefs ipn.
}
b.reloadServeConfigLocked(prefs)
if b.serveConfig.Valid() {
setServeProxy := func(sc ipn.ServeConfigView) {
if !sc.Valid() {
return
}
servePorts := make([]uint16, 0, 3)
b.serveConfig.TCP().Range(func(port uint16, _ ipn.TCPPortHandlerView) bool {
sc.TCP().Range(func(port uint16, _ ipn.TCPPortHandlerView) bool {
if port > 0 {
servePorts = append(servePorts, uint16(port))
}
@@ -4093,6 +4136,9 @@ func (b *LocalBackend) setTCPPortsInterceptedFromNetmapAndPrefsLocked(prefs ipn.
b.updateServeTCPPortNetMapAddrListenersLocked(servePorts)
}
}
setServeProxy(b.serveConfig)
setServeProxy(b.memServeConfig)
// Kick off a Hostinfo update to control if WireIngress changed.
if wire := b.wantIngressLocked(); b.hostinfo != nil && b.hostinfo.WireIngress != wire {
b.logf("Hostinfo.WireIngress changed to %v", wire)
@@ -4107,35 +4153,39 @@ func (b *LocalBackend) setTCPPortsInterceptedFromNetmapAndPrefsLocked(prefs ipn.
// backend specified in serveConfig. It expects serveConfig to be valid and
// up-to-date, so should be called after reloadServeConfigLocked.
func (b *LocalBackend) setServeProxyHandlersLocked() {
if !b.serveConfig.Valid() {
return
}
var backends map[string]bool
b.serveConfig.Web().Range(func(_ ipn.HostPort, conf ipn.WebServerConfigView) (cont bool) {
conf.Handlers().Range(func(_ string, h ipn.HTTPHandlerView) (cont bool) {
backend := h.Proxy()
if backend == "" {
// Only create proxy handlers for servers with a proxy backend.
return true
}
mak.Set(&backends, backend, true)
if _, ok := b.serveProxyHandlers.Load(backend); ok {
return true
}
f := func(sc ipn.ServeConfigView) {
if !sc.Valid() {
return
}
sc.Web().Range(func(_ ipn.HostPort, conf ipn.WebServerConfigView) (cont bool) {
conf.Handlers().Range(func(_ string, h ipn.HTTPHandlerView) (cont bool) {
backend := h.Proxy()
if backend == "" {
// Only create proxy handlers for servers with a proxy backend.
return true
}
mak.Set(&backends, backend, true)
if _, ok := b.serveProxyHandlers.Load(backend); ok {
return true
}
b.logf("serve: creating a new proxy handler for %s", backend)
p, err := b.proxyHandlerForBackend(backend)
if err != nil {
// The backend endpoint (h.Proxy) should have been validated by expandProxyTarget
// in the CLI, so just log the error here.
b.logf("[unexpected] could not create proxy for %v: %s", backend, err)
b.logf("serve: creating a new proxy handler for %s", backend)
p, err := b.proxyHandlerForBackend(backend)
if err != nil {
// The backend endpoint (h.Proxy) should have been validated by expandProxyTarget
// in the CLI, so just log the error here.
b.logf("[unexpected] could not create proxy for %v: %s", backend, err)
return true
}
b.serveProxyHandlers.Store(backend, p)
return true
}
b.serveProxyHandlers.Store(backend, p)
})
return true
})
return true
})
}
f(b.serveConfig)
f(b.memServeConfig)
// Clean up handlers for proxy backends that are no longer present
// in configuration.
@@ -4293,7 +4343,7 @@ func (b *LocalBackend) FileTargets() ([]*apitype.FileTarget, error) {
continue
}
ret = append(ret, &apitype.FileTarget{
Node: p,
Node: p.AsStruct(),
PeerAPIURL: peerAPI,
})
}
@@ -4306,15 +4356,15 @@ func (b *LocalBackend) FileTargets() ([]*apitype.FileTarget, error) {
// the netmap.
//
// b.mu must be locked.
func (b *LocalBackend) peerIsTaildropTargetLocked(p *tailcfg.Node) bool {
if b.netMap == nil || p == nil {
func (b *LocalBackend) peerIsTaildropTargetLocked(p tailcfg.NodeView) bool {
if b.netMap == nil || !p.Valid() {
return false
}
if b.netMap.User == p.User {
if b.netMap.User() == p.User() {
return true
}
if len(p.Addresses) > 0 &&
b.peerHasCapLocked(p.Addresses[0].Addr(), tailcfg.PeerCapabilityFileSharingTarget) {
if p.Addresses().Len() > 0 &&
b.peerHasCapLocked(p.Addresses().At(0).Addr(), tailcfg.PeerCapabilityFileSharingTarget) {
// Explicitly noted in the netmap ACL caps as a target.
return true
}
@@ -4374,9 +4424,9 @@ func (b *LocalBackend) registerIncomingFile(inf *incomingFile, active bool) {
}
}
func peerAPIPorts(peer *tailcfg.Node) (p4, p6 uint16) {
svcs := peer.Hostinfo.Services()
for i, n := 0, svcs.Len(); i < n; i++ {
func peerAPIPorts(peer tailcfg.NodeView) (p4, p6 uint16) {
svcs := peer.Hostinfo().Services()
for i := range svcs.LenIter() {
s := svcs.At(i)
switch s.Proto {
case tailcfg.PeerAPI4:
@@ -4402,8 +4452,8 @@ func peerAPIURL(ip netip.Addr, port uint16) string {
// peerAPIBase returns the "http://ip:port" URL base to reach peer's peerAPI.
// It returns the empty string if the peer doesn't support the peerapi
// or there's no matching address family based on the netmap's own addresses.
func peerAPIBase(nm *netmap.NetworkMap, peer *tailcfg.Node) string {
if nm == nil || peer == nil || !peer.Hostinfo.Valid() {
func peerAPIBase(nm *netmap.NetworkMap, peer tailcfg.NodeView) string {
if nm == nil || !peer.Valid() || !peer.Hostinfo().Valid() {
return ""
}
@@ -4429,8 +4479,9 @@ func peerAPIBase(nm *netmap.NetworkMap, peer *tailcfg.Node) string {
return ""
}
func nodeIP(n *tailcfg.Node, pred func(netip.Addr) bool) netip.Addr {
for _, a := range n.Addresses {
func nodeIP(n tailcfg.NodeView, pred func(netip.Addr) bool) netip.Addr {
for i := range n.Addresses().LenIter() {
a := n.Addresses().At(i)
if a.IsSingleIP() && pred(a.Addr()) {
return a.Addr()
}
@@ -4540,15 +4591,15 @@ func exitNodeCanProxyDNS(nm *netmap.NetworkMap, exitNodeID tailcfg.StableNodeID)
return "", false
}
for _, p := range nm.Peers {
if p.StableID == exitNodeID && peerCanProxyDNS(p) {
if p.StableID() == exitNodeID && peerCanProxyDNS(p) {
return peerAPIBase(nm, p) + "/dns-query", true
}
}
return "", false
}
func peerCanProxyDNS(p *tailcfg.Node) bool {
if p.Cap >= 26 {
func peerCanProxyDNS(p tailcfg.NodeView) bool {
if p.Cap() >= 26 {
// Actually added at 25
// (https://github.com/tailscale/tailscale/blob/3ae6f898cfdb58fd0e30937147dd6ce28c6808dd/tailcfg/tailcfg.go#L51)
// so anything >= 26 can do it.
@@ -4556,10 +4607,9 @@ func peerCanProxyDNS(p *tailcfg.Node) bool {
}
// If p.Cap is not populated (e.g. older control server), then do the old
// thing of searching through services.
services := p.Hostinfo.Services()
for i, n := 0, services.Len(); i < n; i++ {
s := services.At(i)
if s.Proto == tailcfg.PeerAPIDNS && s.Port >= 1 {
services := p.Hostinfo().Services()
for i := range services.LenIter() {
if s := services.At(i); s.Proto == tailcfg.PeerAPIDNS && s.Port >= 1 {
return true
}
}
@@ -4904,7 +4954,8 @@ func (b *LocalBackend) resetForProfileChangeLockedOnEntry() error {
}
b.lastServeConfJSON = mem.B(nil)
b.serveConfig = ipn.ServeConfigView{}
b.enterStateLockedOnEntry(ipn.NoState) // Reset state.
b.memServeConfig = ipn.ServeConfigView{} // is this needed?
b.enterStateLockedOnEntry(ipn.NoState) // Reset state.
health.SetLocalLogConfigHealth(nil)
return b.Start(ipn.Options{})
}

View File

@@ -87,46 +87,46 @@ func TestNetworkMapCompare(t *testing.T) {
},
{
"Peers identical",
&netmap.NetworkMap{Peers: []*tailcfg.Node{}},
&netmap.NetworkMap{Peers: []*tailcfg.Node{}},
&netmap.NetworkMap{Peers: nodeViews([]*tailcfg.Node{})},
&netmap.NetworkMap{Peers: nodeViews([]*tailcfg.Node{})},
true,
},
{
"Peer list length",
// length of Peers list differs
&netmap.NetworkMap{Peers: []*tailcfg.Node{{}}},
&netmap.NetworkMap{Peers: []*tailcfg.Node{}},
&netmap.NetworkMap{Peers: nodeViews([]*tailcfg.Node{{}})},
&netmap.NetworkMap{Peers: nodeViews([]*tailcfg.Node{})},
false,
},
{
"Node names identical",
&netmap.NetworkMap{Peers: []*tailcfg.Node{{Name: "A"}}},
&netmap.NetworkMap{Peers: []*tailcfg.Node{{Name: "A"}}},
&netmap.NetworkMap{Peers: nodeViews([]*tailcfg.Node{{Name: "A"}})},
&netmap.NetworkMap{Peers: nodeViews([]*tailcfg.Node{{Name: "A"}})},
true,
},
{
"Node names differ",
&netmap.NetworkMap{Peers: []*tailcfg.Node{{Name: "A"}}},
&netmap.NetworkMap{Peers: []*tailcfg.Node{{Name: "B"}}},
&netmap.NetworkMap{Peers: nodeViews([]*tailcfg.Node{{Name: "A"}})},
&netmap.NetworkMap{Peers: nodeViews([]*tailcfg.Node{{Name: "B"}})},
false,
},
{
"Node lists identical",
&netmap.NetworkMap{Peers: []*tailcfg.Node{node1, node1}},
&netmap.NetworkMap{Peers: []*tailcfg.Node{node1, node1}},
&netmap.NetworkMap{Peers: nodeViews([]*tailcfg.Node{node1, node1})},
&netmap.NetworkMap{Peers: nodeViews([]*tailcfg.Node{node1, node1})},
true,
},
{
"Node lists differ",
&netmap.NetworkMap{Peers: []*tailcfg.Node{node1, node1}},
&netmap.NetworkMap{Peers: []*tailcfg.Node{node1, node2}},
&netmap.NetworkMap{Peers: nodeViews([]*tailcfg.Node{node1, node1})},
&netmap.NetworkMap{Peers: nodeViews([]*tailcfg.Node{node1, node2})},
false,
},
{
"Node Users differ",
// User field is not checked.
&netmap.NetworkMap{Peers: []*tailcfg.Node{{User: 0}}},
&netmap.NetworkMap{Peers: []*tailcfg.Node{{User: 1}}},
&netmap.NetworkMap{Peers: nodeViews([]*tailcfg.Node{{User: 0}})},
&netmap.NetworkMap{Peers: nodeViews([]*tailcfg.Node{{User: 1}})},
true,
},
}
@@ -483,7 +483,7 @@ func TestPeerAPIBase(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got := peerAPIBase(tt.nm, tt.peer)
got := peerAPIBase(tt.nm, tt.peer.View())
if got != tt.want {
t.Errorf("got %q; want %q", got, tt.want)
}
@@ -758,7 +758,7 @@ func TestPacketFilterPermitsUnlockedNodes(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := packetFilterPermitsUnlockedNodes(tt.peers, tt.filter); got != tt.want {
if got := packetFilterPermitsUnlockedNodes(nodeViews(tt.peers), tt.filter); got != tt.want {
t.Errorf("got %v, want %v", got, tt.want)
}
})
@@ -795,9 +795,9 @@ func TestStatusWithoutPeers(t *testing.T) {
cc.send(nil, "", false, &netmap.NetworkMap{
MachineStatus: tailcfg.MachineAuthorized,
Addresses: ipps("100.101.101.101"),
SelfNode: &tailcfg.Node{
SelfNode: (&tailcfg.Node{
Addresses: ipps("100.101.101.101"),
},
}).View(),
})
got := b.StatusWithoutPeers()
if got.TailscaleIPs == nil {

View File

@@ -69,16 +69,16 @@ func (b *LocalBackend) tkaFilterNetmapLocked(nm *netmap.NetworkMap) {
var toDelete map[int]bool // peer index => true
for i, p := range nm.Peers {
if p.UnsignedPeerAPIOnly {
if p.UnsignedPeerAPIOnly() {
// Not subject to tailnet lock.
continue
}
if len(p.KeySignature) == 0 {
b.logf("Network lock is dropping peer %v(%v) due to missing signature", p.ID, p.StableID)
if p.KeySignature().Len() == 0 {
b.logf("Network lock is dropping peer %v(%v) due to missing signature", p.ID(), p.StableID())
mak.Set(&toDelete, i, true)
} else {
if err := b.tka.authority.NodeKeyAuthorized(p.Key, p.KeySignature); err != nil {
b.logf("Network lock is dropping peer %v(%v) due to failed signature check: %v", p.ID, p.StableID, err)
if err := b.tka.authority.NodeKeyAuthorized(p.Key(), p.KeySignature().AsSlice()); err != nil {
b.logf("Network lock is dropping peer %v(%v) due to failed signature check: %v", p.ID(), p.StableID(), err)
mak.Set(&toDelete, i, true)
}
}
@@ -86,7 +86,7 @@ func (b *LocalBackend) tkaFilterNetmapLocked(nm *netmap.NetworkMap) {
// nm.Peers is ordered, so deletion must be order-preserving.
if len(toDelete) > 0 {
peers := make([]*tailcfg.Node, 0, len(nm.Peers))
peers := make([]tailcfg.NodeView, 0, len(nm.Peers))
filtered := make([]ipnstate.TKAFilteredPeer, 0, len(toDelete))
for i, p := range nm.Peers {
if !toDelete[i] {
@@ -94,13 +94,14 @@ func (b *LocalBackend) tkaFilterNetmapLocked(nm *netmap.NetworkMap) {
} else {
// Record information about the node we filtered out.
fp := ipnstate.TKAFilteredPeer{
Name: p.Name,
ID: p.ID,
StableID: p.StableID,
TailscaleIPs: make([]netip.Addr, len(p.Addresses)),
NodeKey: p.Key,
Name: p.Name(),
ID: p.ID(),
StableID: p.StableID(),
TailscaleIPs: make([]netip.Addr, p.Addresses().Len()),
NodeKey: p.Key(),
}
for i, addr := range p.Addresses {
for i := range p.Addresses().LenIter() {
addr := p.Addresses().At(i)
if addr.IsSingleIP() && tsaddr.IsTailscaleIP(addr.Addr()) {
fp.TailscaleIPs[i] = addr.Addr()
}
@@ -115,7 +116,7 @@ func (b *LocalBackend) tkaFilterNetmapLocked(nm *netmap.NetworkMap) {
}
// Check that we ourselves are not locked out, report a health issue if so.
if nm.SelfNode != nil && b.tka.authority.NodeKeyAuthorized(nm.SelfNode.Key, nm.SelfNode.KeySignature) != nil {
if nm.SelfNode.Valid() && b.tka.authority.NodeKeyAuthorized(nm.SelfNode.Key(), nm.SelfNode.KeySignature().AsSlice()) != nil {
health.SetTKAHealth(errors.New(healthmsg.LockedOut))
} else {
health.SetTKAHealth(nil)
@@ -424,7 +425,7 @@ func (b *LocalBackend) NetworkLockStatus() *ipnstate.NetworkLockStatus {
var selfAuthorized bool
if b.netMap != nil {
selfAuthorized = b.tka.authority.NodeKeyAuthorized(b.netMap.SelfNode.Key, b.netMap.SelfNode.KeySignature) == nil
selfAuthorized = b.tka.authority.NodeKeyAuthorized(b.netMap.SelfNode.Key(), b.netMap.SelfNode.KeySignature().AsSlice()) == nil
}
keys := b.tka.authority.Keys()

View File

@@ -558,26 +558,26 @@ func TestTKAFilterNetmap(t *testing.T) {
t.Fatal(err)
}
nm := netmap.NetworkMap{
Peers: []*tailcfg.Node{
nm := &netmap.NetworkMap{
Peers: nodeViews([]*tailcfg.Node{
{ID: 1, Key: n1.Public(), KeySignature: n1GoodSig.Serialize()},
{ID: 2, Key: n2.Public(), KeySignature: nil}, // missing sig
{ID: 3, Key: n3.Public(), KeySignature: n1GoodSig.Serialize()}, // someone elses sig
{ID: 4, Key: n4.Public(), KeySignature: n4Sig.Serialize()}, // messed-up signature
{ID: 5, Key: n5.Public(), KeySignature: n5GoodSig.Serialize()},
},
}),
}
b := &LocalBackend{
logf: t.Logf,
tka: &tkaState{authority: authority},
}
b.tkaFilterNetmapLocked(&nm)
b.tkaFilterNetmapLocked(nm)
want := []*tailcfg.Node{
want := nodeViews([]*tailcfg.Node{
{ID: 1, Key: n1.Public(), KeySignature: n1GoodSig.Serialize()},
{ID: 5, Key: n5.Public(), KeySignature: n5GoodSig.Serialize()},
}
})
nodePubComparer := cmp.Comparer(func(x, y key.NodePublic) bool {
return x.Raw32() == y.Raw32()
})

View File

@@ -22,6 +22,7 @@ import (
"path"
"path/filepath"
"runtime"
"slices"
"sort"
"strconv"
"strings"
@@ -32,7 +33,6 @@ import (
"unicode/utf8"
"github.com/kortschak/wol"
"golang.org/x/exp/slices"
"golang.org/x/net/dns/dnsmessage"
"golang.org/x/net/http/httpguts"
"tailscale.com/client/tailscale/apitype"
@@ -47,6 +47,7 @@ import (
"tailscale.com/net/netutil"
"tailscale.com/net/sockstats"
"tailscale.com/tailcfg"
"tailscale.com/types/views"
"tailscale.com/util/clientmetric"
"tailscale.com/util/multierr"
"tailscale.com/version/distro"
@@ -569,14 +570,14 @@ func (pln *peerAPIListener) ServeConn(src netip.AddrPort, c net.Conn) {
return
}
nm := pln.lb.NetMap()
if nm == nil || nm.SelfNode == nil {
if nm == nil || !nm.SelfNode.Valid() {
logf("peerapi: no netmap")
c.Close()
return
}
h := &peerAPIHandler{
ps: pln.ps,
isSelf: nm.SelfNode.User == peerNode.User,
isSelf: nm.SelfNode.User() == peerNode.User(),
remoteAddr: src,
selfNode: nm.SelfNode,
peerNode: peerNode,
@@ -596,8 +597,8 @@ type peerAPIHandler struct {
ps *peerAPIServer
remoteAddr netip.AddrPort
isSelf bool // whether peerNode is owned by same user as this node
selfNode *tailcfg.Node // this node; always non-nil
peerNode *tailcfg.Node // peerNode is who's making the request
selfNode tailcfg.NodeView // this node; always non-nil
peerNode tailcfg.NodeView // peerNode is who's making the request
peerUser tailcfg.UserProfile // profile of peerNode
}
@@ -608,11 +609,11 @@ func (h *peerAPIHandler) logf(format string, a ...any) {
// isAddressValid reports whether addr is a valid destination address for this
// node originating from the peer.
func (h *peerAPIHandler) isAddressValid(addr netip.Addr) bool {
if h.peerNode.SelfNodeV4MasqAddrForThisPeer != nil {
return *h.peerNode.SelfNodeV4MasqAddrForThisPeer == addr
if v := h.peerNode.SelfNodeV4MasqAddrForThisPeer(); v != nil {
return *v == addr
}
pfx := netip.PrefixFrom(addr, addr.BitLen())
return slices.Contains(h.selfNode.Addresses, pfx)
return views.SliceContains(h.selfNode.Addresses(), pfx)
}
func (h *peerAPIHandler) validateHost(r *http.Request) error {
@@ -733,7 +734,7 @@ func (h *peerAPIHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
<body>
<h1>Hello, %s (%v)</h1>
This is my Tailscale device. Your device is %v.
`, html.EscapeString(who), h.remoteAddr.Addr(), html.EscapeString(h.peerNode.ComputedName))
`, html.EscapeString(who), h.remoteAddr.Addr(), html.EscapeString(h.peerNode.ComputedName()))
if h.isSelf {
fmt.Fprintf(w, "<p>You are the owner of this node.\n")
@@ -1024,7 +1025,7 @@ func (f *incomingFile) PartialFile() ipn.PartialFile {
// canPutFile reports whether h can put a file ("Taildrop") to this node.
func (h *peerAPIHandler) canPutFile() bool {
if h.peerNode.UnsignedPeerAPIOnly {
if h.peerNode.UnsignedPeerAPIOnly() {
// Unsigned peers can't send files.
return false
}
@@ -1034,11 +1035,11 @@ func (h *peerAPIHandler) canPutFile() bool {
// canDebug reports whether h can debug this node (goroutines, metrics,
// magicsock internal state, etc).
func (h *peerAPIHandler) canDebug() bool {
if !slices.Contains(h.selfNode.Capabilities, tailcfg.CapabilityDebug) {
if !views.SliceContains(h.selfNode.Capabilities(), tailcfg.CapabilityDebug) {
// This node does not expose debug info.
return false
}
if h.peerNode.UnsignedPeerAPIOnly {
if h.peerNode.UnsignedPeerAPIOnly() {
// Unsigned peers can't debug.
return false
}
@@ -1047,7 +1048,7 @@ func (h *peerAPIHandler) canDebug() bool {
// canWakeOnLAN reports whether h can send a Wake-on-LAN packet from this node.
func (h *peerAPIHandler) canWakeOnLAN() bool {
if h.peerNode.UnsignedPeerAPIOnly {
if h.peerNode.UnsignedPeerAPIOnly() {
return false
}
return h.isSelf || h.peerHasCap(tailcfg.PeerCapabilityWakeOnLAN)

View File

@@ -456,15 +456,15 @@ func TestHandlePeerAPI(t *testing.T) {
lb := &LocalBackend{
logf: e.logBuf.Logf,
capFileSharing: tt.capSharing,
netMap: &netmap.NetworkMap{SelfNode: selfNode},
netMap: &netmap.NetworkMap{SelfNode: selfNode.View()},
clock: &tstest.Clock{},
}
e.ph = &peerAPIHandler{
isSelf: tt.isSelf,
selfNode: selfNode,
peerNode: &tailcfg.Node{
selfNode: selfNode.View(),
peerNode: (&tailcfg.Node{
ComputedName: "some-peer-name",
},
}).View(),
ps: &peerAPIServer{
b: lb,
},
@@ -513,12 +513,12 @@ func TestFileDeleteRace(t *testing.T) {
}
ph := &peerAPIHandler{
isSelf: true,
peerNode: &tailcfg.Node{
peerNode: (&tailcfg.Node{
ComputedName: "some-peer-name",
},
selfNode: &tailcfg.Node{
}).View(),
selfNode: (&tailcfg.Node{
Addresses: []netip.Prefix{netip.MustParsePrefix("100.100.100.101/32")},
},
}).View(),
ps: ps,
}
buf := make([]byte, 2<<20)

View File

@@ -10,10 +10,10 @@ import (
"math/rand"
"net/netip"
"runtime"
"slices"
"strings"
"time"
"golang.org/x/exp/slices"
"tailscale.com/envknob"
"tailscale.com/ipn"
"tailscale.com/types/logger"

View File

@@ -17,12 +17,13 @@ import (
"net/url"
"os"
"path"
"slices"
"strconv"
"strings"
"sync"
"time"
"golang.org/x/exp/slices"
"github.com/google/uuid"
"tailscale.com/ipn"
"tailscale.com/logtail/backoff"
"tailscale.com/net/netutil"
@@ -193,7 +194,7 @@ func (b *LocalBackend) updateServeTCPPortNetMapAddrListenersLocked(ports []uint1
b.logf("netMap is nil")
return
}
if nm.SelfNode == nil {
if !nm.SelfNode.Valid() {
b.logf("netMap SelfNode is nil")
return
}
@@ -227,22 +228,27 @@ func (b *LocalBackend) SetServeConfig(config *ipn.ServeConfig) error {
if nm == nil {
return errors.New("netMap is nil")
}
if nm.SelfNode == nil {
if !nm.SelfNode.Valid() {
return errors.New("netMap SelfNode is nil")
}
profileID := b.pm.CurrentProfile().ID
confKey := ipn.ServeConfigKey(profileID)
var bs []byte
if config != nil {
j, err := json.Marshal(config)
if err != nil {
return fmt.Errorf("encoding serve config: %w", err)
if !config.InMemory {
profileID := b.pm.CurrentProfile().ID
confKey := ipn.ServeConfigKey(profileID)
var bs []byte
if config != nil {
j, err := json.Marshal(config)
if err != nil {
return fmt.Errorf("encoding serve config: %w", err)
}
bs = j
}
bs = j
}
if err := b.store.WriteState(confKey, bs); err != nil {
return fmt.Errorf("writing ServeConfig to StateStore: %w", err)
if err := b.store.WriteState(confKey, bs); err != nil {
return fmt.Errorf("writing ServeConfig to StateStore: %w", err)
}
} else {
b.memServeConfig = config.View()
}
b.setTCPPortsInterceptedFromNetmapAndPrefsLocked(b.pm.CurrentPrefs())
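
With the InMemory flag, SetServeConfig either persists the config to the StateStore or swaps the parallel in-memory view, which is what guarantees a foreground funnel disappears if tailscaled restarts. A caller-side sketch (host and port are hypothetical):

cfg := &ipn.ServeConfig{
    InMemory: true, // never written to the StateStore; gone on restart
    TCP:      map[uint16]*ipn.TCPPortHandler{443: {HTTPS: true}},
    AllowFunnel: map[ipn.HostPort]bool{
        "example.tailnet.ts.net:443": true,
    },
}
if err := b.SetServeConfig(cfg); err != nil {
    log.Fatalf("enabling in-memory serve config: %v", err)
}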
@@ -251,67 +257,233 @@ func (b *LocalBackend) SetServeConfig(config *ipn.ServeConfig) error {
// ServeConfig provides a view of the current serve mappings.
// If serving is not configured, the returned view is not Valid.
func (b *LocalBackend) ServeConfig() ipn.ServeConfigView {
func (b *LocalBackend) ServeConfig(inMemory bool) ipn.ServeConfigView {
b.mu.Lock()
defer b.mu.Unlock()
if inMemory {
return b.memServeConfig
}
return b.serveConfig
}
func (b *LocalBackend) HandleIngressTCPConn(ingressPeer *tailcfg.Node, target ipn.HostPort, srcAddr netip.AddrPort, getConnOrReset func() (net.Conn, bool), sendRST func()) {
b.mu.Lock()
sc := b.serveConfig
b.mu.Unlock()
if !sc.Valid() {
b.logf("localbackend: got ingress conn w/o serveConfig; rejecting")
sendRST()
return
// StreamServe opens a stream that writes a log entry to w for each
// incoming connection made to the given HostPort.
//
// If Serve and Funnel were not already enabled for the HostPort in the
// ServeConfig, the backend enables them for the lifespan of the context
// and turns them back off once the context is closed. If either is
// already enabled, it remains that way, but logs are still streamed.
func (b *LocalBackend) StreamServe(ctx context.Context, w io.Writer, req ipn.ServeStreamRequest) (err error) {
f, ok := w.(http.Flusher)
if !ok {
return errors.New("writer not a flusher")
}
f.Flush()
if !sc.AllowFunnel().Get(target) {
b.logf("localbackend: got ingress conn for unconfigured %q; rejecting", target)
sendRST()
return
}
_, port, err := net.SplitHostPort(string(target))
port, err := req.HostPort.Port()
if err != nil {
b.logf("localbackend: got ingress conn for bad target %q; rejecting", target)
sendRST()
return
return err
}
port16, err := strconv.ParseUint(port, 10, 16)
if err != nil {
b.logf("localbackend: got ingress conn for bad target %q; rejecting", target)
sendRST()
return
// Turn on Funnel for the given HostPort.
sc := b.ServeConfig(true).AsStruct()
if sc == nil {
sc = &ipn.ServeConfig{InMemory: true}
}
dport := uint16(port16)
if b.getTCPHandlerForFunnelFlow != nil {
handler := b.getTCPHandlerForFunnelFlow(srcAddr, dport)
if handler != nil {
c, ok := getConnOrReset()
if !ok {
b.logf("localbackend: getConn didn't complete from %v to port %v", srcAddr, dport)
return
}
handler(c)
setHandler(sc, req)
if err := b.SetServeConfig(sc); err != nil {
return fmt.Errorf("errro setting serve config: %w", err)
}
// Defer turning off Funnel once stream ends.
defer func() {
sc := b.ServeConfig(true).AsStruct()
deleteHandler(sc, req, port)
err = errors.Join(err, b.SetServeConfig(sc))
}()
var writeErrs []error
writeToStream := func(log ipn.FunnelRequestLog) {
jsonLog, err := json.Marshal(log)
if err != nil {
writeErrs = append(writeErrs, err)
return
}
if _, err := fmt.Fprintf(w, "%s\n", jsonLog); err != nil {
writeErrs = append(writeErrs, err)
return
}
f.Flush()
}
// TODO(bradfitz): pass ingressPeer etc in context to tcpHandlerForServe,
// extend serveHTTPContext or similar.
handler := b.tcpHandlerForServe(dport, srcAddr)
if handler == nil {
sendRST()
return
// Hook up connections stream.
b.mu.Lock()
mak.NonNilMapForJSON(&b.serveStreamers)
if b.serveStreamers[port] == nil {
b.serveStreamers[port] = make(map[uint32]func(ipn.FunnelRequestLog))
}
c, ok := getConnOrReset()
id := uuid.New().ID()
b.serveStreamers[port][id] = writeToStream
b.mu.Unlock()
// Clean up streamer when done.
defer func() {
b.mu.Lock()
delete(b.serveStreamers[port], id)
b.mu.Unlock()
}()
select {
case <-ctx.Done():
// Triggered by foreground `tailscale funnel` process
// (the streamer) getting closed, or by turning off Tailscale.
}
return errors.Join(writeErrs...)
}
func setHandler(sc *ipn.ServeConfig, req ipn.ServeStreamRequest) {
if sc.TCP == nil {
sc.TCP = make(map[uint16]*ipn.TCPPortHandler)
}
if _, ok := sc.TCP[443]; !ok {
sc.TCP[443] = &ipn.TCPPortHandler{
HTTPS: true,
}
}
if sc.Web == nil {
sc.Web = make(map[ipn.HostPort]*ipn.WebServerConfig)
}
wsc, ok := sc.Web[req.HostPort]
if !ok {
b.logf("localbackend: getConn didn't complete from %v to port %v", srcAddr, dport)
wsc = &ipn.WebServerConfig{}
sc.Web[req.HostPort] = wsc
}
if wsc.Handlers == nil {
wsc.Handlers = make(map[string]*ipn.HTTPHandler)
}
wsc.Handlers[req.MountPoint] = &ipn.HTTPHandler{
Proxy: req.Source,
}
if sc.AllowFunnel == nil {
sc.AllowFunnel = make(map[ipn.HostPort]bool)
}
sc.AllowFunnel[req.HostPort] = true
}
func deleteHandler(sc *ipn.ServeConfig, req ipn.ServeStreamRequest, port uint16) {
delete(sc.AllowFunnel, req.HostPort)
if sc.TCP != nil {
delete(sc.TCP, port)
}
if sc.Web == nil {
return
}
handler(c)
if sc.Web[req.HostPort] == nil {
return
}
wsc, ok := sc.Web[req.HostPort]
if !ok {
return
}
if wsc.Handlers == nil {
return
}
if _, ok := wsc.Handlers[req.MountPoint]; !ok {
return
}
delete(wsc.Handlers, req.MountPoint)
if len(wsc.Handlers) == 0 {
delete(sc.Web, req.HostPort)
}
}
func (b *LocalBackend) maybeLogServeConnection(destPort uint16, srcAddr netip.AddrPort) {
b.mu.Lock()
streamers := b.serveStreamers[destPort]
b.mu.Unlock()
if len(streamers) == 0 {
return
}
var log ipn.FunnelRequestLog
log.SrcAddr = srcAddr
log.Time = b.clock.Now()
if node, user, ok := b.WhoIs(srcAddr); ok {
log.NodeName = node.ComputedName()
if node.IsTagged() {
log.NodeTags = node.Tags().AsSlice()
} else {
log.UserLoginName = user.LoginName
log.UserDisplayName = user.DisplayName
}
}
for _, stream := range streamers {
stream(log)
}
}
func (b *LocalBackend) HandleIngressTCPConn(ingressPeer tailcfg.NodeView, target ipn.HostPort, srcAddr netip.AddrPort, getConnOrReset func() (net.Conn, bool), sendRST func()) {
b.mu.Lock()
sc := b.serveConfig
msc := b.memServeConfig
b.mu.Unlock()
f := func(sc ipn.ServeConfigView) {
if !sc.Valid() {
b.logf("localbackend: got ingress conn w/o serveConfig; rejecting")
sendRST()
return
}
if !sc.AllowFunnel().Get(target) {
b.logf("localbackend: got ingress conn for unconfigured %q; rejecting", target)
sendRST()
return
}
_, port, err := net.SplitHostPort(string(target))
if err != nil {
b.logf("localbackend: got ingress conn for bad target %q; rejecting", target)
sendRST()
return
}
port16, err := strconv.ParseUint(port, 10, 16)
if err != nil {
b.logf("localbackend: got ingress conn for bad target %q; rejecting", target)
sendRST()
return
}
dport := uint16(port16)
if b.getTCPHandlerForFunnelFlow != nil {
handler := b.getTCPHandlerForFunnelFlow(srcAddr, dport)
if handler != nil {
c, ok := getConnOrReset()
if !ok {
b.logf("localbackend: getConn didn't complete from %v to port %v", srcAddr, dport)
return
}
handler(c)
return
}
}
// TODO(bradfitz): pass ingressPeer etc in context to tcpHandlerForServe,
// extend serveHTTPContext or similar.
handler := b.tcpHandlerForServe(dport, srcAddr)
if handler == nil {
sendRST()
return
}
c, ok := getConnOrReset()
if !ok {
b.logf("localbackend: getConn didn't complete from %v to port %v", srcAddr, dport)
return
}
handler(c)
}
f(sc)
f(msc)
}
// tcpHandlerForServe returns a handler for a TCP connection to be served via
@@ -319,89 +491,100 @@ func (b *LocalBackend) HandleIngressTCPConn(ingressPeer *tailcfg.Node, target ip
func (b *LocalBackend) tcpHandlerForServe(dport uint16, srcAddr netip.AddrPort) (handler func(net.Conn) error) {
b.mu.Lock()
sc := b.serveConfig
msc := b.memServeConfig
b.mu.Unlock()
if !sc.Valid() {
b.logf("[unexpected] localbackend: got TCP conn w/o serveConfig; from %v to port %v", srcAddr, dport)
return nil
}
tcph, ok := sc.TCP().GetOk(dport)
if !ok {
b.logf("[unexpected] localbackend: got TCP conn without TCP config for port %v; from %v", dport, srcAddr)
return nil
}
if tcph.HTTPS() || tcph.HTTP() {
hs := &http.Server{
Handler: http.HandlerFunc(b.serveWebHandler),
BaseContext: func(_ net.Listener) context.Context {
return context.WithValue(context.Background(), serveHTTPContextKey{}, &serveHTTPContext{
SrcAddr: srcAddr,
DestPort: dport,
})
},
f := func(sc ipn.ServeConfigView) (handler func(net.Conn) error) {
if !sc.Valid() {
// TODO: should log only if both configs are invalid
b.logf("[unexpected] localbackend: got TCP conn w/o serveConfig; from %v to port %v", srcAddr, dport)
return nil
}
if tcph.HTTPS() {
hs.TLSConfig = &tls.Config{
GetCertificate: b.getTLSServeCertForPort(dport),
tcph, ok := sc.TCP().GetOk(dport)
if !ok {
// TODO: should log only if both configs are not ok
b.logf("[unexpected] localbackend: got TCP conn without TCP config for port %v; from %v", dport, srcAddr)
return nil
}
if tcph.HTTPS() || tcph.HTTP() {
hs := &http.Server{
Handler: http.HandlerFunc(b.serveWebHandler),
BaseContext: func(_ net.Listener) context.Context {
return context.WithValue(context.Background(), serveHTTPContextKey{}, &serveHTTPContext{
SrcAddr: srcAddr,
DestPort: dport,
})
},
}
if tcph.HTTPS() {
hs.TLSConfig = &tls.Config{
GetCertificate: b.getTLSServeCertForPort(dport),
}
return func(c net.Conn) error {
return hs.ServeTLS(netutil.NewOneConnListener(c, nil), "", "")
}
}
return func(c net.Conn) error {
return hs.ServeTLS(netutil.NewOneConnListener(c, nil), "", "")
return hs.Serve(netutil.NewOneConnListener(c, nil))
}
}
return func(c net.Conn) error {
return hs.Serve(netutil.NewOneConnListener(c, nil))
if backDst := tcph.TCPForward(); backDst != "" {
return func(conn net.Conn) error {
defer conn.Close()
b.maybeLogServeConnection(dport, srcAddr)
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
backConn, err := b.dialer.SystemDial(ctx, "tcp", backDst)
cancel()
if err != nil {
b.logf("localbackend: failed to TCP proxy port %v (from %v) to %s: %v", dport, srcAddr, backDst, err)
return nil
}
defer backConn.Close()
if sni := tcph.TerminateTLS(); sni != "" {
conn = tls.Server(conn, &tls.Config{
GetCertificate: func(hi *tls.ClientHelloInfo) (*tls.Certificate, error) {
ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
defer cancel()
pair, err := b.GetCertPEM(ctx, sni, false)
if err != nil {
return nil, err
}
cert, err := tls.X509KeyPair(pair.CertPEM, pair.KeyPEM)
if err != nil {
return nil, err
}
return &cert, nil
},
})
}
// TODO(bradfitz): do the RegisterIPPortIdentity and
// UnregisterIPPortIdentity stuff that netstack does
errc := make(chan error, 1)
go func() {
_, err := io.Copy(backConn, conn)
errc <- err
}()
go func() {
_, err := io.Copy(conn, backConn)
errc <- err
}()
return <-errc
}
}
b.logf("closing TCP conn to port %v (from %v) with actionless TCPPortHandler", dport, srcAddr)
return nil
}
if backDst := tcph.TCPForward(); backDst != "" {
return func(conn net.Conn) error {
defer conn.Close()
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
backConn, err := b.dialer.SystemDial(ctx, "tcp", backDst)
cancel()
if err != nil {
b.logf("localbackend: failed to TCP proxy port %v (from %v) to %s: %v", dport, srcAddr, backDst, err)
return nil
}
defer backConn.Close()
if sni := tcph.TerminateTLS(); sni != "" {
conn = tls.Server(conn, &tls.Config{
GetCertificate: func(hi *tls.ClientHelloInfo) (*tls.Certificate, error) {
ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
defer cancel()
pair, err := b.GetCertPEM(ctx, sni, false)
if err != nil {
return nil, err
}
cert, err := tls.X509KeyPair(pair.CertPEM, pair.KeyPEM)
if err != nil {
return nil, err
}
return &cert, nil
},
})
}
// TODO(bradfitz): do the RegisterIPPortIdentity and
// UnregisterIPPortIdentity stuff that netstack does
errc := make(chan error, 1)
go func() {
_, err := io.Copy(backConn, conn)
errc <- err
}()
go func() {
_, err := io.Copy(conn, backConn)
errc <- err
}()
return <-errc
}
if h := f(sc); h != nil {
return h
}
b.logf("closing TCP conn to port %v (from %v) with actionless TCPPortHandler", dport, srcAddr)
return nil
return f(msc)
}
func getServeHTTPContext(r *http.Request) (c *serveHTTPContext, ok bool) {
@@ -527,6 +710,9 @@ func (b *LocalBackend) serveWebHandler(w http.ResponseWriter, r *http.Request) {
http.NotFound(w, r)
return
}
if c, ok := getServeHTTPContext(r); ok {
b.maybeLogServeConnection(c.DestPort, c.SrcAddr)
}
if s := h.Text(); s != "" {
w.Header().Set("Content-Type", "text/plain; charset=utf-8")
io.WriteString(w, s)
@@ -662,7 +848,11 @@ func (b *LocalBackend) webServerConfig(hostname string, port uint16) (c ipn.WebS
if !b.serveConfig.Valid() {
return c, false
}
return b.serveConfig.Web().GetOk(key)
wc, ok := b.serveConfig.Web().GetOk(key)
if ok {
return wc, ok
}
return b.memServeConfig.Web().GetOk(key)
}
func (b *LocalBackend) getTLSServeCertForPort(port uint16) func(hi *tls.ClientHelloInfo) (*tls.Certificate, error) {

View File
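To make the setHandler mutation concrete: for one foreground funnel request it enables HTTPS on :443, mounts the proxy handler at the mount point, and allows Funnel for the HostPort, all in the in-memory config. A hedged, self-contained sketch of the resulting ipn.ServeConfig (the printed shape is illustrative, not an exact fixture):

```go
package main

import (
	"encoding/json"
	"fmt"

	"tailscale.com/ipn"
)

func main() {
	sc := &ipn.ServeConfig{InMemory: true}
	req := ipn.ServeStreamRequest{
		HostPort:   "example.ts.net:443",
		Source:     "http://127.0.0.1:3000",
		MountPoint: "/",
	}

	// Mirror setHandler: terminate HTTPS on :443, mount the proxy
	// handler at the mount point, and allow Funnel for the HostPort.
	sc.TCP = map[uint16]*ipn.TCPPortHandler{443: {HTTPS: true}}
	sc.Web = map[ipn.HostPort]*ipn.WebServerConfig{
		req.HostPort: {Handlers: map[string]*ipn.HTTPHandler{
			req.MountPoint: {Proxy: req.Source},
		}},
	}
	sc.AllowFunnel = map[ipn.HostPort]bool{req.HostPort: true}

	j, _ := json.MarshalIndent(sc, "", "  ")
	fmt.Println(string(j))
}
```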

@@ -190,9 +190,9 @@ func TestServeHTTPProxy(t *testing.T) {
b.pm = pm
b.netMap = &netmap.NetworkMap{
SelfNode: &tailcfg.Node{
SelfNode: (&tailcfg.Node{
Name: "example.ts.net",
},
}).View(),
UserProfiles: map[tailcfg.UserID]tailcfg.UserProfile{
tailcfg.UserID(1): {
LoginName: "someone@example.com",
@@ -201,16 +201,16 @@ func TestServeHTTPProxy(t *testing.T) {
},
},
}
b.nodeByAddr = map[netip.Addr]*tailcfg.Node{
netip.MustParseAddr("100.150.151.152"): {
b.nodeByAddr = map[netip.Addr]tailcfg.NodeView{
netip.MustParseAddr("100.150.151.152"): (&tailcfg.Node{
ComputedName: "some-peer",
User: tailcfg.UserID(1),
},
netip.MustParseAddr("100.150.151.153"): {
}).View(),
netip.MustParseAddr("100.150.151.153"): (&tailcfg.Node{
ComputedName: "some-tagged-peer",
Tags: []string{"tag:server", "tag:test"},
User: tailcfg.UserID(1),
},
}).View(),
}
// Start test serve endpoint.

View File

@@ -20,12 +20,12 @@ import (
"os/exec"
"path/filepath"
"runtime"
"slices"
"strings"
"sync"
"github.com/tailscale/golang-x-crypto/ssh"
"go4.org/mem"
"golang.org/x/exp/slices"
"tailscale.com/tailcfg"
"tailscale.com/util/lineread"
"tailscale.com/util/mak"

View File

@@ -199,6 +199,10 @@ type PeerStatus struct {
OS string // HostInfo.OS
UserID tailcfg.UserID
// AltSharerUserID is the user who shared this node
// if it's different from UserID; otherwise it's zero.
AltSharerUserID tailcfg.UserID `json:",omitempty"`
// TailscaleIPs are the IP addresses assigned to the node.
TailscaleIPs []netip.Addr
@@ -209,7 +213,7 @@ type PeerStatus struct {
// PrimaryRoutes are the routes this node is currently the primary
// subnet router for, as determined by the control plane. It does
// not include the IPs in TailscaleIPs.
PrimaryRoutes *views.IPPrefixSlice `json:",omitempty"`
PrimaryRoutes *views.Slice[netip.Prefix] `json:",omitempty"`
// Endpoints:
Addrs []string
@@ -387,6 +391,9 @@ func (sb *StatusBuilder) AddPeer(peer key.NodePublic, st *PeerStatus) {
if v := st.UserID; v != 0 {
e.UserID = v
}
if v := st.AltSharerUserID; v != 0 {
e.AltSharerUserID = v
}
if v := st.TailscaleIPs; v != nil {
e.TailscaleIPs = v
}

View File

@@ -13,19 +13,18 @@ import (
"errors"
"fmt"
"io"
"io/ioutil"
"net"
"net/http"
"net/http/httputil"
"net/netip"
"net/url"
"runtime"
"slices"
"strconv"
"strings"
"sync"
"time"
"golang.org/x/exp/slices"
"tailscale.com/client/tailscale/apitype"
"tailscale.com/envknob"
"tailscale.com/health"
@@ -99,6 +98,7 @@ var handler = map[string]localAPIHandler{
"set-expiry-sooner": (*Handler).serveSetExpirySooner,
"start": (*Handler).serveStart,
"status": (*Handler).serveStatus,
"stream-serve": (*Handler).serveStreamServe,
"tka/init": (*Handler).serveTKAInit,
"tka/log": (*Handler).serveTKALog,
"tka/modify": (*Handler).serveTKAModify,
@@ -339,8 +339,8 @@ func (h *Handler) serveBugReport(w http.ResponseWriter, r *http.Request) {
// Information about the current node from the netmap
if nm := h.b.NetMap(); nm != nil {
if self := nm.SelfNode; self != nil {
h.logf("user bugreport node info: nodeid=%q stableid=%q expiry=%q", self.ID, self.StableID, self.KeyExpiry.Format(time.RFC3339))
if self := nm.SelfNode; self.Valid() {
h.logf("user bugreport node info: nodeid=%q stableid=%q expiry=%q", self.ID(), self.StableID(), self.KeyExpiry().Format(time.RFC3339))
}
h.logf("user bugreport public keys: machine=%q node=%q", nm.MachineKey, nm.NodeKey)
} else {
@@ -437,8 +437,8 @@ func (h *Handler) serveWhoIs(w http.ResponseWriter, r *http.Request) {
return
}
res := &apitype.WhoIsResponse{
Node: n, // always non-nil per WhoIsResponse contract
UserProfile: &u, // always non-nil per WhoIsResponse contract
Node: n.AsStruct(), // always non-nil per WhoIsResponse contract
UserProfile: &u, // always non-nil per WhoIsResponse contract
CapMap: b.PeerCaps(ipp.Addr()),
}
j, err := json.MarshalIndent(res, "", "\t")
@@ -835,7 +835,7 @@ func (h *Handler) serveServeConfig(w http.ResponseWriter, r *http.Request) {
return
}
w.Header().Set("Content-Type", "application/json")
config := h.b.ServeConfig()
config := h.b.ServeConfig(r.FormValue("memory") == "true")
json.NewEncoder(w).Encode(config)
case "POST":
if !h.PermitWrite {
@@ -857,6 +857,31 @@ func (h *Handler) serveServeConfig(w http.ResponseWriter, r *http.Request) {
}
}
// serveStreamServe handles foreground serve and funnel streams. This is
// currently in development per https://github.com/tailscale/tailscale/issues/8489
func (h *Handler) serveStreamServe(w http.ResponseWriter, r *http.Request) {
if !h.PermitWrite {
// Write permission required because we modify the ServeConfig.
http.Error(w, "serve stream denied", http.StatusForbidden)
return
}
if r.Method != "POST" {
http.Error(w, "POST required", http.StatusMethodNotAllowed)
return
}
var req ipn.ServeStreamRequest
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
writeErrorJSON(w, fmt.Errorf("decoding HostPort: %w", err))
return
}
w.Header().Set("Content-Type", "application/json")
if err := h.b.StreamServe(r.Context(), w, req); err != nil {
writeErrorJSON(w, fmt.Errorf("streaming serve: %w", err))
return
}
w.WriteHeader(http.StatusOK)
}
func (h *Handler) serveCheckIPForwarding(w http.ResponseWriter, r *http.Request) {
if !h.PermitRead {
http.Error(w, "IP forwarding check access denied", http.StatusForbidden)
@@ -1682,7 +1707,7 @@ func (h *Handler) serveTKADisable(w http.ResponseWriter, r *http.Request) {
}
body := io.LimitReader(r.Body, 1024*1024)
secret, err := ioutil.ReadAll(body)
secret, err := io.ReadAll(body)
if err != nil {
http.Error(w, "reading secret", 400)
return
@@ -1755,7 +1780,7 @@ func (h *Handler) serveTKAAffectedSigs(w http.ResponseWriter, r *http.Request) {
http.Error(w, "use POST", http.StatusMethodNotAllowed)
return
}
keyID, err := ioutil.ReadAll(http.MaxBytesReader(w, r.Body, 2048))
keyID, err := io.ReadAll(http.MaxBytesReader(w, r.Body, 2048))
if err != nil {
http.Error(w, "reading body", http.StatusBadRequest)
return
@@ -1824,7 +1849,7 @@ func (h *Handler) serveTKACosignRecoveryAUM(w http.ResponseWriter, r *http.Reque
}
body := io.LimitReader(r.Body, 1024*1024)
aumBytes, err := ioutil.ReadAll(body)
aumBytes, err := io.ReadAll(body)
if err != nil {
http.Error(w, "reading AUM", http.StatusBadRequest)
return
@@ -1855,7 +1880,7 @@ func (h *Handler) serveTKASubmitRecoveryAUM(w http.ResponseWriter, r *http.Reque
}
body := io.LimitReader(r.Body, 1024*1024)
aumBytes, err := ioutil.ReadAll(body)
aumBytes, err := io.ReadAll(body)
if err != nil {
http.Error(w, "reading AUM", http.StatusBadRequest)
return

View File
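For the new stream-serve endpoint above, a hypothetical client sketch: it POSTs an ipn.ServeStreamRequest and then reads one JSON-encoded ipn.FunnelRequestLog per line until the stream closes. The loopback address is an assumption for illustration only; real clients go through tailscale.com/client/tailscale, which knows how to dial the platform's local socket:

```go
package main

import (
	"bufio"
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"

	"tailscale.com/ipn"
)

func main() {
	body, _ := json.Marshal(ipn.ServeStreamRequest{
		HostPort:   "example.ts.net:443",
		Source:     "http://127.0.0.1:3000",
		MountPoint: "/",
	})
	// Hypothetical address for illustration; not how real clients dial.
	resp, err := http.Post("http://localhost:41112/localapi/v0/stream-serve",
		"application/json", bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// The stream is one JSON-encoded ipn.FunnelRequestLog per line.
	sc := bufio.NewScanner(resp.Body)
	for sc.Scan() {
		var l ipn.FunnelRequestLog
		if err := json.Unmarshal(sc.Bytes(), &l); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s %s %s\n", l.Time, l.SrcAddr, l.NodeName)
	}
}
```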

@@ -23,6 +23,7 @@ import (
"tailscale.com/tailcfg"
"tailscale.com/types/persist"
"tailscale.com/types/preftype"
"tailscale.com/types/views"
"tailscale.com/util/dnsname"
)
@@ -506,7 +507,7 @@ func (p *Prefs) AdvertisesExitNode() bool {
if p == nil {
return false
}
return tsaddr.ContainsExitRoutes(p.AdvertiseRoutes)
return tsaddr.ContainsExitRoutes(views.SliceOf(p.AdvertiseRoutes))
}
// SetAdvertiseExitNode mutates p (if non-nil) to add or remove the two

View File
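The prefs change above wraps the raw slice in a read-only view at the call site. A tiny sketch of the updated tsaddr.ContainsExitRoutes call, which now takes a views.Slice instead of a []netip.Prefix:

```go
package main

import (
	"fmt"
	"net/netip"

	"tailscale.com/net/tsaddr"
	"tailscale.com/types/views"
)

func main() {
	routes := []netip.Prefix{
		netip.MustParsePrefix("0.0.0.0/0"),
		netip.MustParsePrefix("::/0"),
	}
	// views.SliceOf is a zero-copy, read-only wrapper around the slice.
	fmt.Println(tsaddr.ContainsExitRoutes(views.SliceOf(routes))) // true
}
```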

@@ -9,10 +9,11 @@ import (
"net"
"net/netip"
"net/url"
"slices"
"strconv"
"strings"
"time"
"golang.org/x/exp/slices"
"tailscale.com/tailcfg"
)
@@ -25,6 +26,11 @@ func ServeConfigKey(profileID ProfileID) StateKey {
// ServeConfig is the JSON type stored in the StateStore for
// StateKey "_serve/$PROFILE_ID" as returned by ServeConfigKey.
type ServeConfig struct {
// InMemory reports whether this config is an
// ephemeral, in-memory config rather than one
// persisted to the local store.
InMemory bool
// TCP are the list of TCP port numbers that tailscaled should handle for
// the Tailscale IP addresses. (not subnet routers, etc)
TCP map[uint16]*TCPPortHandler `json:",omitempty"`
@@ -42,6 +48,21 @@ type ServeConfig struct {
// There is no implicit port 443. It must contain a colon.
type HostPort string
// Port extracts just the port number from hp.
// It returns an error if hp does not end in a
// valid numeric port.
func (hp HostPort) Port() (uint16, error) {
_, port, err := net.SplitHostPort(string(hp))
if err != nil {
return 0, err
}
port16, err := strconv.ParseUint(port, 10, 16)
if err != nil {
return 0, err
}
return uint16(port16), nil
}
// A FunnelConn wraps a net.Conn that is coming over a
// Funnel connection. It can be used to determine further
// information about the connection, like the source address
@@ -62,6 +83,42 @@ type FunnelConn struct {
Src netip.AddrPort
}
// ServeStreamRequest defines the JSON request body
// for the serve stream endpoint
type ServeStreamRequest struct {
// HostPort is the DNS name and port of the
// Tailscale URL.
HostPort HostPort `json:",omitempty"`
// Source is the user's serve source as defined
// in the `tailscale serve` command, such as
// http://127.0.0.1:3000.
Source string `json:",omitempty"`
// MountPoint is the path prefix for
// the given HostPort.
MountPoint string `json:",omitempty"`
}
// FunnelRequestLog is the JSON type written out to io.Writers
// watching funnel connections via ipnlocal.StreamServe.
//
// This structure is in development and subject to change.
type FunnelRequestLog struct {
Time time.Time `json:",omitempty"` // time of request forwarding
// SrcAddr is the address that initiated the Funnel request.
SrcAddr netip.AddrPort `json:",omitempty"`
// The following fields are only populated if the connection
// was initiated from another node on the client's tailnet.
NodeName string `json:",omitempty"` // src node MagicDNS name
NodeTags []string `json:",omitempty"` // src node tags
UserLoginName string `json:",omitempty"` // src node's owner login (if not tagged)
UserDisplayName string `json:",omitempty"` // src node's owner name (if not tagged)
}
// WebServerConfig describes a web server's configuration.
type WebServerConfig struct {
Handlers map[string]*HTTPHandler // mountPoint => handler

View File
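A quick usage sketch of the new HostPort.Port helper defined above:

```go
package main

import (
	"fmt"
	"log"

	"tailscale.com/ipn"
)

func main() {
	hp := ipn.HostPort("example.ts.net:8443")
	port, err := hp.Port() // splits host:port, then parses the port as uint16
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(port) // 8443

	// There is no implicit port 443: a HostPort without a colon is an error.
	if _, err := ipn.HostPort("example.ts.net").Port(); err != nil {
		fmt.Println("no port:", err)
	}
}
```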

@@ -43,6 +43,8 @@ Some packages may only be included on certain architectures or operating systems
- [github.com/google/btree](https://pkg.go.dev/github.com/google/btree) ([Apache-2.0](https://github.com/google/btree/blob/v1.1.2/LICENSE))
- [github.com/google/nftables](https://pkg.go.dev/github.com/google/nftables) ([Apache-2.0](https://github.com/google/nftables/blob/9aa6fdf5a28c/LICENSE))
- [github.com/google/uuid](https://pkg.go.dev/github.com/google/uuid) ([BSD-3-Clause](https://github.com/google/uuid/blob/v1.3.0/LICENSE))
- [github.com/gorilla/csrf](https://pkg.go.dev/github.com/gorilla/csrf) ([BSD-3-Clause](https://github.com/gorilla/csrf/blob/v1.7.1/LICENSE))
- [github.com/gorilla/securecookie](https://pkg.go.dev/github.com/gorilla/securecookie) ([BSD-3-Clause](https://github.com/gorilla/securecookie/blob/v1.1.1/LICENSE))
- [github.com/hdevalence/ed25519consensus](https://pkg.go.dev/github.com/hdevalence/ed25519consensus) ([BSD-3-Clause](https://github.com/hdevalence/ed25519consensus/blob/v0.1.0/LICENSE))
- [github.com/illarion/gonotify](https://pkg.go.dev/github.com/illarion/gonotify) ([MIT](https://github.com/illarion/gonotify/blob/v1.0.1/LICENSE))
- [github.com/insomniacslk/dhcp](https://pkg.go.dev/github.com/insomniacslk/dhcp) ([BSD-3-Clause](https://github.com/insomniacslk/dhcp/blob/974c6f05fe16/LICENSE))

View File

@@ -1,45 +0,0 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build !js
// Package logheap logs a heap pprof profile.
package logheap
import (
"bytes"
"context"
"log"
"net/http"
"runtime"
"runtime/pprof"
"time"
)
// LogHeap uploads a JSON logtail record with the base64 heap pprof by means
// of an HTTP POST request to the endpoint referred to in postURL.
func LogHeap(postURL string) {
if postURL == "" {
return
}
runtime.GC()
buf := new(bytes.Buffer)
if err := pprof.WriteHeapProfile(buf); err != nil {
log.Printf("LogHeap: %v", err)
return
}
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
req, err := http.NewRequestWithContext(ctx, "POST", postURL, buf)
if err != nil {
log.Printf("LogHeap: %v", err)
return
}
res, err := http.DefaultClient.Do(req)
if err != nil {
log.Printf("LogHeap: %v", err)
return
}
defer res.Body.Close()
}

View File

@@ -1,7 +0,0 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
package logheap
func LogHeap(postURL string) {
}

View File

@@ -22,7 +22,6 @@ import (
"time"
"tailscale.com/envknob"
"tailscale.com/net/interfaces"
"tailscale.com/net/netmon"
"tailscale.com/net/sockstats"
"tailscale.com/tstime"
@@ -427,8 +426,8 @@ func (l *Logger) internetUp() bool {
func (l *Logger) awaitInternetUp(ctx context.Context) {
upc := make(chan bool, 1)
defer l.netMonitor.RegisterChangeCallback(func(changed bool, st *interfaces.State) {
if st.AnyInterfaceUp() {
defer l.netMonitor.RegisterChangeCallback(func(delta *netmon.ChangeDelta) {
if delta.New.AnyInterfaceUp() {
select {
case upc <- true:
default:

View File
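The logtail change above adopts netmon's new callback signature, which passes a *netmon.ChangeDelta instead of a bare bool. A sketch of registering such a callback; the constructor and Start call are assumptions about the netmon API, as only the callback signature is shown in this diff:

```go
package main

import (
	"log"

	"tailscale.com/net/netmon"
)

func main() {
	// Assumed constructor; not shown in this diff.
	mon, err := netmon.New(log.Printf)
	if err != nil {
		log.Fatal(err)
	}
	// ChangeFunc now receives a *netmon.ChangeDelta carrying the old and
	// new interface state, rather than a bare "changed" bool.
	unregister := mon.RegisterChangeCallback(func(delta *netmon.ChangeDelta) {
		if delta.New.AnyInterfaceUp() {
			log.Printf("an interface is up again")
		}
	})
	defer unregister()
	mon.Start()
	select {} // block so the monitor keeps running (sketch only)
}
```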

@@ -9,9 +9,8 @@ import (
"expvar"
"fmt"
"io"
"slices"
"strings"
"golang.org/x/exp/slices"
)
// Set is a string-to-Var map variable that satisfies the expvar.Var

View File

@@ -18,19 +18,6 @@ const (
debugStrideDelete = false
)
// strideEntry is a strideTable entry.
type strideEntry[T any] struct {
// prefixIndex is the prefixIndex(...) value that caused this stride entry's
// value to be populated, or 0 if value is nil.
//
// We need to keep track of this because allot() uses it to determine
// whether an entry was propagated from a parent entry, or if it's a
// different independent route.
prefixIndex int
// value is the value associated with the strideEntry, if any.
value *T
}
// strideTable is a binary tree that implements an 8-bit routing table.
//
// The leaves of the binary tree are host routes (/8s). Each parent is a
@@ -54,7 +41,9 @@ type strideTable[T any] struct {
// paper, it's hijacked through sneaky C memory trickery to store
// the refcount, but this is Go, where we don't store random bits
// in pointers lest we confuse the GC)
entries [lastHostIndex + 1]strideEntry[T]
//
// A nil value means no route matches the queried address.
entries [lastHostIndex + 1]*T
// children are the child tables of this table. Each child
// represents the address space within one of this table's host
// routes (/8).
@@ -112,13 +101,6 @@ func (t *strideTable[T]) getOrCreateChild(addr uint8) (child *strideTable[T], cr
return ret, false
}
// getValAndChild returns both the prefix and child strideTable for
// addr. Both returned values can be nil if no entry of that type
// exists for addr.
func (t *strideTable[T]) getValAndChild(addr uint8) (*T, *strideTable[T]) {
return t.entries[hostIndex(addr)].value, t.children[addr]
}
// findFirstChild returns the first child strideTable in t, or nil if
// t has no children.
func (t *strideTable[T]) findFirstChild() *strideTable[T] {
@@ -130,21 +112,41 @@ func (t *strideTable[T]) findFirstChild() *strideTable[T] {
return nil
}
// hasPrefixRootedAt reports whether t.entries[idx] is the root node of
// a prefix.
func (t *strideTable[T]) hasPrefixRootedAt(idx int) bool {
val := t.entries[idx]
if val == nil {
return false
}
parentIdx := parentIndex(idx)
if parentIdx == 0 {
// idx is non-nil, and is at the 0/0 route position.
return true
}
if parent := t.entries[parentIdx]; val != parent {
// parent node in the tree isn't the same prefix, so idx must
// be a root.
return true
}
return false
}
// allot updates entries whose stored prefixIndex matches oldPrefixIndex, in the
// subtree rooted at idx. Matching entries have their stored prefixIndex set to
// newPrefixIndex, and their value set to val.
//
// allot is the core of the ART algorithm, enabling efficient insertion/deletion
// while preserving very fast lookups.
func (t *strideTable[T]) allot(idx int, oldPrefixIndex, newPrefixIndex int, val *T) {
if t.entries[idx].prefixIndex != oldPrefixIndex {
// current prefixIndex isn't what we expect. This is a recursive call
// that found a child subtree that already has a more specific route
// installed. Don't touch it.
func (t *strideTable[T]) allot(idx int, old, new *T) {
if t.entries[idx] != old {
// current idx isn't what we expect. This is a recursive call
// that found a child subtree that already has a more specific
// route installed. Don't touch it.
return
}
t.entries[idx].value = val
t.entries[idx].prefixIndex = newPrefixIndex
t.entries[idx] = new
if idx >= firstHostIndex {
// The entry we just updated was a host route, we're at the bottom of
// the binary tree.
@@ -152,51 +154,73 @@ func (t *strideTable[T]) allot(idx int, oldPrefixIndex, newPrefixIndex int, val
}
// Propagate the allotment to this node's children.
left := idx << 1
t.allot(left, oldPrefixIndex, newPrefixIndex, val)
t.allot(left, old, new)
right := left + 1
t.allot(right, oldPrefixIndex, newPrefixIndex, val)
t.allot(right, old, new)
}
// insert adds the route addr/prefixLen to t, with value val.
func (t *strideTable[T]) insert(addr uint8, prefixLen int, val *T) {
func (t *strideTable[T]) insert(addr uint8, prefixLen int, val T) {
idx := prefixIndex(addr, prefixLen)
old := t.entries[idx].value
oldIdx := t.entries[idx].prefixIndex
if oldIdx == idx && old == val {
// This exact prefix+value is already in the table.
return
}
t.allot(idx, oldIdx, idx, val)
if oldIdx != idx {
// This route entry was freshly created (not just updated), that's a new
// reference.
if !t.hasPrefixRootedAt(idx) {
// This route entry is being freshly created (not just
// updated), that's a new reference.
t.routeRefs++
}
old := t.entries[idx]
// For allot to work correctly, each distinct prefix in the
// strideTable must have a different value pointer, even if val is
// identical. This new()+assignment guarantees that each inserted
// prefix gets a unique address.
p := new(T)
*p = val
t.allot(idx, old, p)
return
}
// delete removes the route addr/prefixLen from t. Returns the value
// that was associated with the deleted prefix, or nil if the prefix
// wasn't in the strideTable.
func (t *strideTable[T]) delete(addr uint8, prefixLen int) *T {
// delete removes the route addr/prefixLen from t. Reports whether the
// prefix existed in the table prior to deletion.
func (t *strideTable[T]) delete(addr uint8, prefixLen int) (wasPresent bool) {
idx := prefixIndex(addr, prefixLen)
recordedIdx := t.entries[idx].prefixIndex
if recordedIdx != idx {
if !t.hasPrefixRootedAt(idx) {
// Route entry doesn't exist
return nil
return false
}
val := t.entries[idx].value
parentIdx := idx >> 1
t.allot(idx, idx, t.entries[parentIdx].prefixIndex, t.entries[parentIdx].value)
val := t.entries[idx]
var parentVal *T
if parentIdx := parentIndex(idx); parentIdx != 0 {
parentVal = t.entries[parentIdx]
}
t.allot(idx, val, parentVal)
t.routeRefs--
return val
return true
}
// get does a route lookup for addr and returns the associated value, or nil if
// no route matched.
func (t *strideTable[T]) get(addr uint8) *T {
return t.entries[hostIndex(addr)].value
// get does a route lookup for addr and returns (value, true) if a matching
// route exists, or (zero, false) otherwise.
func (t *strideTable[T]) get(addr uint8) (ret T, ok bool) {
if val := t.entries[hostIndex(addr)]; val != nil {
return *val, true
}
return ret, false
}
// getValAndChild returns both the prefix value and child strideTable
// for addr. valOK reports whether a prefix value exists for addr, and
// child is non-nil if a child exists for addr.
func (t *strideTable[T]) getValAndChild(addr uint8) (val T, valOK bool, child *strideTable[T]) {
vp := t.entries[hostIndex(addr)]
if vp != nil {
val = *vp
valOK = true
}
child = t.children[addr]
return
}
// TableDebugString returns the contents of t, formatted as a table with one
@@ -208,10 +232,10 @@ func (t *strideTable[T]) tableDebugString() string {
continue
}
v := "(nil)"
if ent.value != nil {
v = fmt.Sprint(*ent.value)
if ent != nil {
v = fmt.Sprint(*ent)
}
fmt.Fprintf(&ret, "idx=%3d (%s), parent=%3d (%s), val=%v\n", i, formatPrefixTable(inversePrefixIndex(i)), ent.prefixIndex, formatPrefixTable(inversePrefixIndex((ent.prefixIndex))), v)
fmt.Fprintf(&ret, "idx=%3d (%s), val=%v\n", i, formatPrefixTable(inversePrefixIndex(i)), v)
}
return ret.String()
}
@@ -227,8 +251,8 @@ func (t *strideTable[T]) treeDebugString() string {
func (t *strideTable[T]) treeDebugStringRec(w io.Writer, idx, indent int) {
addr, len := inversePrefixIndex(idx)
if t.entries[idx].prefixIndex != 0 && t.entries[idx].prefixIndex == idx {
fmt.Fprintf(w, "%s%d/%d (%02x/%d) = %v\n", strings.Repeat(" ", indent), addr, len, addr, len, *t.entries[idx].value)
if t.hasPrefixRootedAt(idx) {
fmt.Fprintf(w, "%s%d/%d (%02x/%d) = %v\n", strings.Repeat(" ", indent), addr, len, addr, len, *t.entries[idx])
indent += 2
}
if idx >= firstHostIndex {
@@ -251,6 +275,12 @@ func prefixIndex(addr uint8, prefixLen int) int {
return (int(addr) >> (8 - prefixLen)) + (1 << prefixLen)
}
// parentIndex returns the index of idx's parent prefix, or 0 if idx
// is the index of 0/0.
func parentIndex(idx int) int {
return idx >> 1
}
// hostIndex returns the array index of the host route for addr.
// It is equivalent to prefixIndex(addr, 8).
func hostIndex(addr uint8) int {

View File
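The subtle point in the rewritten insert above is that allot now distinguishes routes by pointer identity rather than by a stored prefixIndex, which is why insert goes through new(T) plus assignment even when two routes carry equal values. A tiny standalone illustration of that invariant, independent of the art package:

```go
package main

import "fmt"

func main() {
	val := 7

	// Two routes with the same value must still be distinguishable,
	// so each insertion allocates its own pointer.
	routeA := new(int)
	*routeA = val
	routeB := new(int)
	*routeB = val

	// Values compare equal...
	fmt.Println(*routeA == *routeB) // true
	// ...but identities differ, which is what allot's
	// "t.entries[idx] != old" check relies on.
	fmt.Println(routeA == routeB) // false
}
```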

@@ -8,12 +8,12 @@ import (
"fmt"
"math/rand"
"net/netip"
"runtime"
"sort"
"strings"
"testing"
"github.com/google/go-cmp/cmp"
"tailscale.com/types/ptr"
)
func TestInversePrefix(t *testing.T) {
@@ -65,10 +65,10 @@ func TestStrideTableInsert(t *testing.T) {
for i := 0; i < 256; i++ {
addr := uint8(i)
slowVal := slow.get(addr)
fastVal := fast.get(addr)
if slowVal != fastVal {
t.Fatalf("strideTable.get(%d) = %v, want %v", addr, *fastVal, *slowVal)
slowVal, slowOK := slow.get(addr)
fastVal, fastOK := fast.get(addr)
if !getsEqual(fastVal, fastOK, slowVal, slowOK) {
t.Fatalf("strideTable.get(%d) = (%v, %v), want (%v, %v)", addr, fastVal, fastOK, slowVal, slowOK)
}
}
}
@@ -91,10 +91,14 @@ func TestStrideTableInsertShuffled(t *testing.T) {
zero := 0
rt := strideTable[int]{}
// strideTable has a value interface, but internally has to keep
// track of distinct routes even if they all have the same
// value. rtZero uses the same value for all routes, and expects
// correct behavior.
rtZero := strideTable[int]{}
for _, route := range routes {
rt.insert(route.addr, route.len, route.val)
rtZero.insert(route.addr, route.len, &zero)
rtZero.insert(route.addr, route.len, zero)
}
// Order of insertion should not affect the final shape of the stride table.
@@ -105,15 +109,15 @@ func TestStrideTableInsertShuffled(t *testing.T) {
for _, route := range routes2 {
rt2.insert(route.addr, route.len, route.val)
}
if diff := cmp.Diff(rt, rt2, cmpDiffOpts...); diff != "" {
if diff := cmp.Diff(rt.tableDebugString(), rt2.tableDebugString()); diff != "" {
t.Errorf("tables ended up different with different insertion order (-got+want):\n%s\n\nOrder 1: %v\nOrder 2: %v", diff, formatSlowEntriesShort(routes), formatSlowEntriesShort(routes2))
}
rtZero2 := strideTable[int]{}
for _, route := range routes2 {
rtZero2.insert(route.addr, route.len, &zero)
rtZero2.insert(route.addr, route.len, zero)
}
if diff := cmp.Diff(rtZero, rtZero2, cmpDiffOpts...); diff != "" {
if diff := cmp.Diff(rtZero.tableDebugString(), rtZero2.tableDebugString(), cmpDiffOpts...); diff != "" {
t.Errorf("tables with identical vals ended up different with different insertion order (-got+want):\n%s\n\nOrder 1: %v\nOrder 2: %v", diff, formatSlowEntriesShort(routes), formatSlowEntriesShort(routes2))
}
}
@@ -150,10 +154,10 @@ func TestStrideTableDelete(t *testing.T) {
for i := 0; i < 256; i++ {
addr := uint8(i)
slowVal := slow.get(addr)
fastVal := fast.get(addr)
if slowVal != fastVal {
t.Fatalf("strideTable.get(%d) = %v, want %v", addr, *fastVal, *slowVal)
slowVal, slowOK := slow.get(addr)
fastVal, fastOK := fast.get(addr)
if !getsEqual(fastVal, fastOK, slowVal, slowOK) {
t.Fatalf("strideTable.get(%d) = (%v, %v), want (%v, %v)", addr, fastVal, fastOK, slowVal, slowOK)
}
}
}
@@ -168,10 +172,14 @@ func TestStrideTableDeleteShuffle(t *testing.T) {
zero := 0
rt := strideTable[int]{}
// strideTable has a value interface, but internally has to keep
// track of distinct routes even if they all have the same
// value. rtZero uses the same value for all routes, and expects
// correct behavior.
rtZero := strideTable[int]{}
for _, route := range routes {
rt.insert(route.addr, route.len, route.val)
rtZero.insert(route.addr, route.len, &zero)
rtZero.insert(route.addr, route.len, zero)
}
for _, route := range toDelete {
rt.delete(route.addr, route.len)
@@ -189,18 +197,18 @@ func TestStrideTableDeleteShuffle(t *testing.T) {
for _, route := range toDelete2 {
rt2.delete(route.addr, route.len)
}
if diff := cmp.Diff(rt, rt2, cmpDiffOpts...); diff != "" {
if diff := cmp.Diff(rt.tableDebugString(), rt2.tableDebugString(), cmpDiffOpts...); diff != "" {
t.Errorf("tables ended up different with different deletion order (-got+want):\n%s\n\nOrder 1: %v\nOrder 2: %v", diff, formatSlowEntriesShort(toDelete), formatSlowEntriesShort(toDelete2))
}
rtZero2 := strideTable[int]{}
for _, route := range routes {
rtZero2.insert(route.addr, route.len, &zero)
rtZero2.insert(route.addr, route.len, zero)
}
for _, route := range toDelete2 {
rtZero2.delete(route.addr, route.len)
}
if diff := cmp.Diff(rtZero, rtZero2, cmpDiffOpts...); diff != "" {
if diff := cmp.Diff(rtZero.tableDebugString(), rtZero2.tableDebugString(), cmpDiffOpts...); diff != "" {
t.Errorf("tables with identical vals ended up different with different deletion order (-got+want):\n%s\n\nOrder 1: %v\nOrder 2: %v", diff, formatSlowEntriesShort(toDelete), formatSlowEntriesShort(toDelete2))
}
}
@@ -218,31 +226,35 @@ func forStrideCountAndOrdering(b *testing.B, fn func(b *testing.B, routes []slow
routes := shufflePrefixes(allPrefixes())
for _, nroutes := range strideRouteCount {
b.Run(fmt.Sprint(nroutes), func(b *testing.B) {
routes := append([]slowEntry[int](nil), routes[:nroutes]...)
b.Run("random_order", func(b *testing.B) {
runAndRecord := func(b *testing.B) {
b.ReportAllocs()
var startMem, endMem runtime.MemStats
runtime.ReadMemStats(&startMem)
fn(b, routes)
})
runtime.ReadMemStats(&endMem)
ops := float64(b.N) * float64(len(routes))
allocs := float64(endMem.Mallocs - startMem.Mallocs)
bytes := float64(endMem.TotalAlloc - startMem.TotalAlloc)
b.ReportMetric(roundFloat64(allocs/ops), "allocs/op")
b.ReportMetric(roundFloat64(bytes/ops), "B/op")
}
routes := append([]slowEntry[int](nil), routes[:nroutes]...)
b.Run("random_order", runAndRecord)
sort.Slice(routes, func(i, j int) bool {
if routes[i].len < routes[j].len {
return true
}
return routes[i].addr < routes[j].addr
})
b.Run("largest_first", func(b *testing.B) {
b.ReportAllocs()
fn(b, routes)
})
b.Run("largest_first", runAndRecord)
sort.Slice(routes, func(i, j int) bool {
if routes[j].len < routes[i].len {
return true
}
return routes[j].addr < routes[i].addr
})
b.Run("smallest_first", func(b *testing.B) {
b.ReportAllocs()
fn(b, routes)
})
b.Run("smallest_first", runAndRecord)
})
}
}
@@ -253,7 +265,7 @@ func BenchmarkStrideTableInsertion(b *testing.B) {
for i := 0; i < b.N; i++ {
var rt strideTable[int]
for _, route := range routes {
rt.insert(route.addr, route.len, &val)
rt.insert(route.addr, route.len, val)
}
}
inserts := float64(b.N) * float64(len(routes))
@@ -269,7 +281,7 @@ func BenchmarkStrideTableDeletion(b *testing.B) {
val := 0
var rt strideTable[int]
for _, route := range routes {
rt.insert(route.addr, route.len, &val)
rt.insert(route.addr, route.len, val)
}
b.ResetTimer()
@@ -287,7 +299,7 @@ func BenchmarkStrideTableDeletion(b *testing.B) {
})
}
var writeSink *int
var writeSink int
func BenchmarkStrideTableGet(b *testing.B) {
// No need to forCountAndOrdering here, route lookup time is independent of
@@ -300,7 +312,7 @@ func BenchmarkStrideTableGet(b *testing.B) {
b.ResetTimer()
for i := 0; i < b.N; i++ {
writeSink = rt.get(uint8(i))
writeSink, _ = rt.get(uint8(i))
}
gets := float64(b.N)
elapsedSec := b.Elapsed().Seconds()
@@ -318,7 +330,7 @@ type slowTable[T any] struct {
type slowEntry[T any] struct {
addr uint8
len int
val *T
val T
}
func (t *slowTable[T]) String() string {
@@ -331,13 +343,14 @@ func (t *slowTable[T]) String() string {
})
var ret bytes.Buffer
for _, pfx := range pfxs {
fmt.Fprintf(&ret, "%3d/%d (%08b/%08b) = %v\n", pfx.addr, pfx.len, pfx.addr, pfxMask(pfx.len), *pfx.val)
fmt.Fprintf(&ret, "%3d/%d (%08b/%08b) = %v\n", pfx.addr, pfx.len, pfx.addr, pfxMask(pfx.len), pfx.val)
}
return ret.String()
}
func (t *slowTable[T]) insert(addr uint8, prefixLen int, val *T) {
func (t *slowTable[T]) insert(addr uint8, prefixLen int, val T) {
t.delete(addr, prefixLen) // no-op if prefix doesn't exist
t.prefixes = append(t.prefixes, slowEntry[T]{addr, prefixLen, val})
}
@@ -352,18 +365,15 @@ func (t *slowTable[T]) delete(addr uint8, prefixLen int) {
t.prefixes = pfx
}
func (t *slowTable[T]) get(addr uint8) *T {
var (
ret *T
curLen = -1
)
func (t *slowTable[T]) get(addr uint8) (ret T, ok bool) {
var curLen = -1
for _, e := range t.prefixes {
if addr&pfxMask(e.len) == e.addr && e.len >= curLen {
ret = e.val
curLen = e.len
}
}
return ret
return ret, curLen != -1
}
func pfxMask(pfxLen int) uint8 {
@@ -374,7 +384,7 @@ func allPrefixes() []slowEntry[int] {
ret := make([]slowEntry[int], 0, lastHostIndex)
for i := 1; i < lastHostIndex+1; i++ {
a, l := inversePrefixIndex(i)
ret = append(ret, slowEntry[int]{a, l, ptr.To(i)})
ret = append(ret, slowEntry[int]{a, l, i})
}
return ret
}
@@ -393,6 +403,15 @@ func formatSlowEntriesShort[T any](ents []slowEntry[T]) string {
}
var cmpDiffOpts = []cmp.Option{
cmp.AllowUnexported(strideTable[int]{}, strideEntry[int]{}),
cmp.Comparer(func(a, b netip.Prefix) bool { return a == b }),
}
func getsEqual[T comparable](a T, aOK bool, b T, bOK bool) bool {
if !aOK && !bOK {
return true
}
if aOK != bOK {
return false
}
return a == b
}

View File

@@ -51,7 +51,7 @@ func (t *Table[T]) tableForAddr(addr netip.Addr) *strideTable[T] {
// Get does a route lookup for addr and returns the associated value
// and true, or the zero value and false if no route matched.
func (t *Table[T]) Get(addr netip.Addr) *T {
func (t *Table[T]) Get(addr netip.Addr) (ret T, ok bool) {
t.init()
// Ideally we would use addr.AsSlice here, but AsSlice is just
@@ -84,13 +84,13 @@ func (t *Table[T]) Get(addr netip.Addr) *T {
const maxDepth = 16
type prefixAndRoute struct {
prefix netip.Prefix
route *T
route T
}
strideMatch := make([]prefixAndRoute, 0, maxDepth)
findLeaf:
for {
rt, child := st.getValAndChild(bs[i])
if rt != nil {
rt, rtOK, child := st.getValAndChild(bs[i])
if rtOK {
// This strideTable contains a route that may be relevant to our
// search, remember it.
strideMatch = append(strideMatch, prefixAndRoute{st.prefix, rt})
@@ -115,7 +115,7 @@ findLeaf:
// the correct most-specific route.
for i := len(strideMatch) - 1; i >= 0; i-- {
if m := strideMatch[i]; m.prefix.Contains(addr) {
return m.route
return m.route, true
}
}
@@ -123,16 +123,13 @@ findLeaf:
// immediately), or we went on a wild goose chase down a compressed path for
// the wrong prefix, and also found no usable routes on the way back up to
// the root. This is a miss.
return nil
return ret, false
}
// Insert adds pfx to the table, with value val.
// If pfx is already present in the table, its value is set to val.
func (t *Table[T]) Insert(pfx netip.Prefix, val *T) {
func (t *Table[T]) Insert(pfx netip.Prefix, val T) {
t.init()
if val == nil {
panic("Table.Insert called with nil value")
}
// The standard library doesn't enforce normalized prefixes (where
// the non-prefix bits are all zero). These algorithms require
@@ -423,7 +420,7 @@ func (t *Table[T]) Delete(pfx netip.Prefix) {
if debugDelete {
fmt.Printf("delete: delete from st.prefix=%s addr=%d/%d\n", st.prefix, bs[byteIdx], numBits)
}
if st.delete(bs[byteIdx], numBits) == nil {
if routeExisted := st.delete(bs[byteIdx], numBits); !routeExisted {
// We're in the right strideTable, but pfx wasn't in
// it. Refcounts haven't changed, so we can skip cleanup.
if debugDelete {

View File
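With the value-type change, Table's lookup follows the comma-ok convention instead of returning a possibly-nil *T. A minimal usage sketch; the import path is an assumption, as it isn't shown in this diff:

```go
package main

import (
	"fmt"
	"net/netip"

	"tailscale.com/net/art" // assumed import path
)

func main() {
	tbl := &art.Table[int]{}
	tbl.Insert(netip.MustParsePrefix("10.0.0.0/8"), 1)
	tbl.Insert(netip.MustParsePrefix("10.1.0.0/16"), 2)

	// Get now returns (value, ok); the most specific route wins.
	if v, ok := tbl.Get(netip.MustParseAddr("10.1.2.3")); ok {
		fmt.Println("route value:", v) // 2
	}
	if _, ok := tbl.Get(netip.MustParseAddr("192.0.2.1")); !ok {
		fmt.Println("no route")
	}
}
```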

@@ -12,8 +12,6 @@ import (
"strconv"
"testing"
"time"
"tailscale.com/types/ptr"
)
func TestRegression(t *testing.T) {
@@ -30,17 +28,16 @@ func TestRegression(t *testing.T) {
slow := slowPrefixTable[int]{}
p := netip.MustParsePrefix
v := ptr.To(1)
tbl.Insert(p("226.205.197.0/24"), v)
slow.insert(p("226.205.197.0/24"), v)
v = ptr.To(2)
tbl.Insert(p("226.205.0.0/16"), v)
slow.insert(p("226.205.0.0/16"), v)
tbl.Insert(p("226.205.197.0/24"), 1)
slow.insert(p("226.205.197.0/24"), 1)
tbl.Insert(p("226.205.0.0/16"), 2)
slow.insert(p("226.205.0.0/16"), 2)
probe := netip.MustParseAddr("226.205.121.152")
got, want := tbl.Get(probe), slow.get(probe)
if got != want {
t.Fatalf("got %v, want %v", got, want)
got, gotOK := tbl.Get(probe)
want, wantOK := slow.get(probe)
if !getsEqual(got, gotOK, want, wantOK) {
t.Fatalf("got (%v, %v), want (%v, %v)", got, gotOK, want, wantOK)
}
})
@@ -49,18 +46,18 @@ func TestRegression(t *testing.T) {
// within computePrefixSplit.
t1, t2 := &Table[int]{}, &Table[int]{}
p := netip.MustParsePrefix
v1, v2 := ptr.To(1), ptr.To(2)
t1.Insert(p("136.20.0.0/16"), v1)
t1.Insert(p("136.20.201.62/32"), v2)
t1.Insert(p("136.20.0.0/16"), 1)
t1.Insert(p("136.20.201.62/32"), 2)
t2.Insert(p("136.20.201.62/32"), v2)
t2.Insert(p("136.20.0.0/16"), v1)
t2.Insert(p("136.20.201.62/32"), 2)
t2.Insert(p("136.20.0.0/16"), 1)
a := netip.MustParseAddr("136.20.54.139")
got, want := t2.Get(a), t1.Get(a)
if got != want {
t.Errorf("Get(%q) is insertion order dependent (t1=%v, t2=%v)", a, want, got)
got1, ok1 := t1.Get(a)
got2, ok2 := t2.Get(a)
if !getsEqual(got1, ok1, got2, ok2) {
t.Errorf("Get(%q) is insertion order dependent: t1=(%v, %v), t2=(%v, %v)", a, got1, ok1, got2, ok2)
}
})
}
@@ -99,7 +96,7 @@ func TestInsert(t *testing.T) {
p := netip.MustParsePrefix
// Create a new leaf strideTable, with compressed path
tbl.Insert(p("192.168.0.1/32"), ptr.To(1))
tbl.Insert(p("192.168.0.1/32"), 1)
checkRoutes(t, tbl, []tableTest{
{"192.168.0.1", 1},
{"192.168.0.2", -1},
@@ -114,7 +111,7 @@ func TestInsert(t *testing.T) {
})
// Insert into previous leaf, no tree changes
tbl.Insert(p("192.168.0.2/32"), ptr.To(2))
tbl.Insert(p("192.168.0.2/32"), 2)
checkRoutes(t, tbl, []tableTest{
{"192.168.0.1", 1},
{"192.168.0.2", 2},
@@ -129,7 +126,7 @@ func TestInsert(t *testing.T) {
})
// Insert into previous leaf, unaligned prefix covering the /32s
tbl.Insert(p("192.168.0.0/26"), ptr.To(7))
tbl.Insert(p("192.168.0.0/26"), 7)
checkRoutes(t, tbl, []tableTest{
{"192.168.0.1", 1},
{"192.168.0.2", 2},
@@ -144,7 +141,7 @@ func TestInsert(t *testing.T) {
})
// Create a different leaf elsewhere
tbl.Insert(p("10.0.0.0/27"), ptr.To(3))
tbl.Insert(p("10.0.0.0/27"), 3)
checkRoutes(t, tbl, []tableTest{
{"192.168.0.1", 1},
{"192.168.0.2", 2},
@@ -159,7 +156,7 @@ func TestInsert(t *testing.T) {
})
// Insert that creates a new intermediate table and a new child
tbl.Insert(p("192.168.1.1/32"), ptr.To(4))
tbl.Insert(p("192.168.1.1/32"), 4)
checkRoutes(t, tbl, []tableTest{
{"192.168.0.1", 1},
{"192.168.0.2", 2},
@@ -174,7 +171,7 @@ func TestInsert(t *testing.T) {
})
// Insert that creates a new intermediate table but no new child
tbl.Insert(p("192.170.0.0/16"), ptr.To(5))
tbl.Insert(p("192.170.0.0/16"), 5)
checkRoutes(t, tbl, []tableTest{
{"192.168.0.1", 1},
{"192.168.0.2", 2},
@@ -190,7 +187,7 @@ func TestInsert(t *testing.T) {
// New leaf in a different subtree, so the next insert can test a
// variant of decompression.
tbl.Insert(p("192.180.0.1/32"), ptr.To(8))
tbl.Insert(p("192.180.0.1/32"), 8)
checkRoutes(t, tbl, []tableTest{
{"192.168.0.1", 1},
{"192.168.0.2", 2},
@@ -206,7 +203,7 @@ func TestInsert(t *testing.T) {
// Insert that creates a new intermediate table but no new child,
// with an unaligned intermediate
tbl.Insert(p("192.180.0.0/21"), ptr.To(9))
tbl.Insert(p("192.180.0.0/21"), 9)
checkRoutes(t, tbl, []tableTest{
{"192.168.0.1", 1},
{"192.168.0.2", 2},
@@ -221,7 +218,7 @@ func TestInsert(t *testing.T) {
})
// Insert a default route, those have their own codepath.
tbl.Insert(p("0.0.0.0/0"), ptr.To(6))
tbl.Insert(p("0.0.0.0/0"), 6)
checkRoutes(t, tbl, []tableTest{
{"192.168.0.1", 1},
{"192.168.0.2", 2},
@@ -238,7 +235,7 @@ func TestInsert(t *testing.T) {
// Now all of the above again, but for IPv6.
// Create a new leaf strideTable, with compressed path
tbl.Insert(p("ff:aaaa::1/128"), ptr.To(1))
tbl.Insert(p("ff:aaaa::1/128"), 1)
checkRoutes(t, tbl, []tableTest{
{"ff:aaaa::1", 1},
{"ff:aaaa::2", -1},
@@ -253,7 +250,7 @@ func TestInsert(t *testing.T) {
})
// Insert into previous leaf, no tree changes
tbl.Insert(p("ff:aaaa::2/128"), ptr.To(2))
tbl.Insert(p("ff:aaaa::2/128"), 2)
checkRoutes(t, tbl, []tableTest{
{"ff:aaaa::1", 1},
{"ff:aaaa::2", 2},
@@ -268,7 +265,7 @@ func TestInsert(t *testing.T) {
})
// Insert into previous leaf, unaligned prefix covering the /128s
tbl.Insert(p("ff:aaaa::/125"), ptr.To(7))
tbl.Insert(p("ff:aaaa::/125"), 7)
checkRoutes(t, tbl, []tableTest{
{"ff:aaaa::1", 1},
{"ff:aaaa::2", 2},
@@ -283,7 +280,7 @@ func TestInsert(t *testing.T) {
})
// Create a different leaf elsewhere
tbl.Insert(p("ffff:bbbb::/120"), ptr.To(3))
tbl.Insert(p("ffff:bbbb::/120"), 3)
checkRoutes(t, tbl, []tableTest{
{"ff:aaaa::1", 1},
{"ff:aaaa::2", 2},
@@ -298,7 +295,7 @@ func TestInsert(t *testing.T) {
})
// Insert that creates a new intermediate table and a new child
tbl.Insert(p("ff:aaaa:aaaa::1/128"), ptr.To(4))
tbl.Insert(p("ff:aaaa:aaaa::1/128"), 4)
checkRoutes(t, tbl, []tableTest{
{"ff:aaaa::1", 1},
{"ff:aaaa::2", 2},
@@ -313,7 +310,7 @@ func TestInsert(t *testing.T) {
})
// Insert that creates a new intermediate table but no new child
tbl.Insert(p("ff:aaaa:aaaa:bb00::/56"), ptr.To(5))
tbl.Insert(p("ff:aaaa:aaaa:bb00::/56"), 5)
checkRoutes(t, tbl, []tableTest{
{"ff:aaaa::1", 1},
{"ff:aaaa::2", 2},
@@ -329,7 +326,7 @@ func TestInsert(t *testing.T) {
// New leaf in a different subtree, so the next insert can test a
// variant of decompression.
tbl.Insert(p("ff:cccc::1/128"), ptr.To(8))
tbl.Insert(p("ff:cccc::1/128"), 8)
checkRoutes(t, tbl, []tableTest{
{"ff:aaaa::1", 1},
{"ff:aaaa::2", 2},
@@ -345,7 +342,7 @@ func TestInsert(t *testing.T) {
// Insert that creates a new intermediate table but no new child,
// with an unaligned intermediate
tbl.Insert(p("ff:cccc::/37"), ptr.To(9))
tbl.Insert(p("ff:cccc::/37"), 9)
checkRoutes(t, tbl, []tableTest{
{"ff:aaaa::1", 1},
{"ff:aaaa::2", 2},
@@ -360,7 +357,7 @@ func TestInsert(t *testing.T) {
})
// Insert a default route, those have their own codepath.
tbl.Insert(p("::/0"), ptr.To(6))
tbl.Insert(p("::/0"), 6)
checkRoutes(t, tbl, []tableTest{
{"ff:aaaa::1", 1},
{"ff:aaaa::2", 2},
@@ -384,7 +381,7 @@ func TestDelete(t *testing.T) {
tbl := &Table[int]{}
checkSize(t, tbl, 2)
tbl.Insert(p("10.0.0.0/8"), ptr.To(1))
tbl.Insert(p("10.0.0.0/8"), 1)
checkRoutes(t, tbl, []tableTest{
{"10.0.0.1", 1},
{"255.255.255.255", -1},
@@ -403,7 +400,7 @@ func TestDelete(t *testing.T) {
tbl := &Table[int]{}
checkSize(t, tbl, 2)
tbl.Insert(p("192.168.0.1/32"), ptr.To(1))
tbl.Insert(p("192.168.0.1/32"), 1)
checkRoutes(t, tbl, []tableTest{
{"192.168.0.1", 1},
{"255.255.255.255", -1},
@@ -421,8 +418,8 @@ func TestDelete(t *testing.T) {
// Create an intermediate with 2 children, then delete one leaf.
tbl := &Table[int]{}
checkSize(t, tbl, 2)
tbl.Insert(p("192.168.0.1/32"), ptr.To(1))
tbl.Insert(p("192.180.0.1/32"), ptr.To(2))
tbl.Insert(p("192.168.0.1/32"), 1)
tbl.Insert(p("192.180.0.1/32"), 2)
checkRoutes(t, tbl, []tableTest{
{"192.168.0.1", 1},
{"192.180.0.1", 2},
@@ -442,9 +439,9 @@ func TestDelete(t *testing.T) {
// Same, but the intermediate carries a route as well.
tbl := &Table[int]{}
checkSize(t, tbl, 2)
tbl.Insert(p("192.168.0.1/32"), ptr.To(1))
tbl.Insert(p("192.180.0.1/32"), ptr.To(2))
tbl.Insert(p("192.0.0.0/10"), ptr.To(3))
tbl.Insert(p("192.168.0.1/32"), 1)
tbl.Insert(p("192.180.0.1/32"), 2)
tbl.Insert(p("192.0.0.0/10"), 3)
checkRoutes(t, tbl, []tableTest{
{"192.168.0.1", 1},
{"192.180.0.1", 2},
@@ -466,9 +463,9 @@ func TestDelete(t *testing.T) {
// Intermediate with 3 leaves, then delete one leaf.
tbl := &Table[int]{}
checkSize(t, tbl, 2)
tbl.Insert(p("192.168.0.1/32"), ptr.To(1))
tbl.Insert(p("192.180.0.1/32"), ptr.To(2))
tbl.Insert(p("192.200.0.1/32"), ptr.To(3))
tbl.Insert(p("192.168.0.1/32"), 1)
tbl.Insert(p("192.180.0.1/32"), 2)
tbl.Insert(p("192.200.0.1/32"), 3)
checkRoutes(t, tbl, []tableTest{
{"192.168.0.1", 1},
{"192.180.0.1", 2},
@@ -490,7 +487,7 @@ func TestDelete(t *testing.T) {
// Delete non-existent prefix, missing strideTable path.
tbl := &Table[int]{}
checkSize(t, tbl, 2)
tbl.Insert(p("192.168.0.1/32"), ptr.To(1))
tbl.Insert(p("192.168.0.1/32"), 1)
checkRoutes(t, tbl, []tableTest{
{"192.168.0.1", 1},
{"192.255.0.1", -1},
@@ -509,7 +506,7 @@ func TestDelete(t *testing.T) {
// with a wrong turn.
tbl := &Table[int]{}
checkSize(t, tbl, 2)
tbl.Insert(p("192.168.0.1/32"), ptr.To(1))
tbl.Insert(p("192.168.0.1/32"), 1)
checkRoutes(t, tbl, []tableTest{
{"192.168.0.1", 1},
{"192.255.0.1", -1},
@@ -528,7 +525,7 @@ func TestDelete(t *testing.T) {
// leaf doesn't contain route.
tbl := &Table[int]{}
checkSize(t, tbl, 2)
tbl.Insert(p("192.168.0.1/32"), ptr.To(1))
tbl.Insert(p("192.168.0.1/32"), 1)
checkRoutes(t, tbl, []tableTest{
{"192.168.0.1", 1},
{"192.255.0.1", -1},
@@ -547,8 +544,8 @@ func TestDelete(t *testing.T) {
// compactable.
tbl := &Table[int]{}
checkSize(t, tbl, 2)
tbl.Insert(p("192.168.0.1/32"), ptr.To(1))
tbl.Insert(p("192.168.0.0/22"), ptr.To(2))
tbl.Insert(p("192.168.0.1/32"), 1)
tbl.Insert(p("192.168.0.0/22"), 2)
checkRoutes(t, tbl, []tableTest{
{"192.168.0.1", 1},
{"192.168.0.2", 2},
@@ -568,7 +565,7 @@ func TestDelete(t *testing.T) {
// Default routes have a special case in the code.
tbl := &Table[int]{}
tbl.Insert(p("0.0.0.0/0"), ptr.To(1))
tbl.Insert(p("0.0.0.0/0"), 1)
tbl.Delete(p("0.0.0.0/0"))
checkRoutes(t, tbl, []tableTest{
@@ -595,20 +592,20 @@ func TestInsertCompare(t *testing.T) {
t.Logf(fast.debugSummary())
}
seenVals4 := map[*int]bool{}
seenVals6 := map[*int]bool{}
seenVals4 := map[int]bool{}
seenVals6 := map[int]bool{}
for i := 0; i < 10_000; i++ {
a := randomAddr()
slowVal := slow.get(a)
fastVal := fast.Get(a)
slowVal, slowOK := slow.get(a)
fastVal, fastOK := fast.Get(a)
if !getsEqual(slowVal, slowOK, fastVal, fastOK) {
t.Fatalf("get(%q) = (%v, %v), want (%v, %v)", a, fastVal, fastOK, slowVal, slowOK)
}
if a.Is6() {
seenVals6[fastVal] = true
} else {
seenVals4[fastVal] = true
}
if slowVal != fastVal {
t.Fatalf("get(%q) = %p, want %p", a, fastVal, slowVal)
}
}
// Empirically, 10k probes into 5k v4 prefixes and 5k v6 prefixes results in
@@ -667,13 +664,10 @@ func TestInsertShuffled(t *testing.T) {
}
for _, a := range addrs {
val1 := rt.Get(a)
val2 := rt2.Get(a)
if val1 == nil && val2 == nil {
continue
}
if (val1 == nil && val2 != nil) || (val1 != nil && val2 == nil) || (*val1 != *val2) {
t.Fatalf("get(%q) = %s, want %s", a, printIntPtr(val2), printIntPtr(val1))
val1, ok1 := rt.Get(a)
val2, ok2 := rt2.Get(a)
if !getsEqual(val1, ok1, val2, ok2) {
t.Fatalf("get(%q) = (%v, %v), want (%v, %v)", a, val2, ok2, val1, ok1)
}
}
}
@@ -727,20 +721,20 @@ func TestDeleteCompare(t *testing.T) {
fast.Delete(pfx.pfx)
}
seenVals4 := map[*int]bool{}
seenVals6 := map[*int]bool{}
seenVals4 := map[int]bool{}
seenVals6 := map[int]bool{}
for i := 0; i < numProbes; i++ {
a := randomAddr()
slowVal := slow.get(a)
fastVal := fast.Get(a)
slowVal, slowOK := slow.get(a)
fastVal, fastOK := fast.Get(a)
if !getsEqual(slowVal, slowOK, fastVal, fastOK) {
t.Fatalf("get(%q) = (%v, %v), want (%v, %v)", a, fastVal, fastOK, slowVal, slowOK)
}
if a.Is6() {
seenVals6[fastVal] = true
} else {
seenVals4[fastVal] = true
}
if slowVal != fastVal {
t.Fatalf("get(%q) = %p, want %p", a, fastVal, slowVal)
}
}
// Empirically, 10k probes into 5k v4 prefixes and 5k v6 prefixes results in
// ~1k distinct values for v4 and ~300 for v6. distinct routes. This sanity
@@ -814,13 +808,10 @@ func TestDeleteShuffled(t *testing.T) {
// test for equivalence statistically with random probes instead.
for i := 0; i < numProbes; i++ {
a := randomAddr()
val1 := rt.Get(a)
val2 := rt2.Get(a)
if val1 == nil && val2 == nil {
continue
}
if (val1 == nil && val2 != nil) || (val1 != nil && val2 == nil) || (*val1 != *val2) {
t.Errorf("get(%q) = %s, want %s", a, printIntPtr(val2), printIntPtr(val1))
val1, ok1 := rt.Get(a)
val2, ok2 := rt2.Get(a)
if !getsEqual(val1, ok1, val2, ok2) {
t.Errorf("get(%q) = (%v, %v), want (%v, %v)", a, val2, ok2, val1, ok1)
}
}
}
@@ -868,12 +859,12 @@ type tableTest struct {
func checkRoutes(t *testing.T, tbl *Table[int], tt []tableTest) {
t.Helper()
for _, tc := range tt {
v := tbl.Get(netip.MustParseAddr(tc.addr))
if v == nil && tc.want != -1 {
t.Errorf("lookup %q got nil, want %d", tc.addr, tc.want)
v, ok := tbl.Get(netip.MustParseAddr(tc.addr))
if !ok && tc.want != -1 {
t.Errorf("lookup %q got (%v, %v), want (_, false)", tc.addr, v, ok)
}
if v != nil && *v != tc.want {
t.Errorf("lookup %q got %d, want %d", tc.addr, *v, tc.want)
if ok && v != tc.want {
t.Errorf("lookup %q got (%v, %v), want (%v, true)", tc.addr, v, ok, tc.want)
}
}
}
@@ -1005,7 +996,7 @@ func BenchmarkTableGet(b *testing.B) {
for i := 0; i < b.N; i++ {
addr := genAddr()
t.Start()
writeSink = rt.Get(addr)
writeSink, _ = rt.Get(addr)
t.Stop()
}
})
@@ -1112,7 +1103,7 @@ type slowPrefixTable[T any] struct {
type slowPrefixEntry[T any] struct {
pfx netip.Prefix
val *T
val T
}
func (t *slowPrefixTable[T]) delete(pfx netip.Prefix) {
@@ -1127,7 +1118,7 @@ func (t *slowPrefixTable[T]) delete(pfx netip.Prefix) {
t.prefixes = ret
}
func (t *slowPrefixTable[T]) insert(pfx netip.Prefix, val *T) {
func (t *slowPrefixTable[T]) insert(pfx netip.Prefix, val T) {
pfx = pfx.Masked()
for i, ent := range t.prefixes {
if ent.pfx == pfx {
@@ -1138,11 +1129,8 @@ func (t *slowPrefixTable[T]) insert(pfx netip.Prefix, val *T) {
t.prefixes = append(t.prefixes, slowPrefixEntry[T]{pfx, val})
}
func (t *slowPrefixTable[T]) get(addr netip.Addr) *T {
var (
ret *T
bestLen = -1
)
func (t *slowPrefixTable[T]) get(addr netip.Addr) (ret T, ok bool) {
bestLen := -1
for _, pfx := range t.prefixes {
if pfx.pfx.Contains(addr) && pfx.pfx.Bits() > bestLen {
@@ -1150,7 +1138,7 @@ func (t *slowPrefixTable[T]) get(addr netip.Addr) *T {
bestLen = pfx.pfx.Bits()
}
}
return ret
return ret, bestLen != -1
}
// randomPrefixes returns n randomly generated prefixes and associated values,
@@ -1176,7 +1164,7 @@ func randomPrefixes4(n int) []slowPrefixEntry[int] {
ret := make([]slowPrefixEntry[int], 0, len(pfxs))
for pfx := range pfxs {
ret = append(ret, slowPrefixEntry[int]{pfx, ptr.To(rand.Int())})
ret = append(ret, slowPrefixEntry[int]{pfx, rand.Int()})
}
return ret
@@ -1197,7 +1185,7 @@ func randomPrefixes6(n int) []slowPrefixEntry[int] {
ret := make([]slowPrefixEntry[int], 0, len(pfxs))
for pfx := range pfxs {
ret = append(ret, slowPrefixEntry[int]{pfx, ptr.To(rand.Int())})
ret = append(ret, slowPrefixEntry[int]{pfx, rand.Int()})
}
return ret
@@ -1230,14 +1218,6 @@ func randomAddr6() netip.Addr {
return netip.AddrFrom16(b)
}
// printIntPtr returns *v as a string, or the literal "<nil>" if v is nil.
func printIntPtr(v *int) string {
if v == nil {
return "<nil>"
}
return fmt.Sprint(*v)
}
// roundFloat64 rounds f to 2 decimal places, for display.
//
// It round-trips through a float->string->float conversion, so should not be

View File

@@ -12,11 +12,11 @@ import (
"net"
"net/netip"
"runtime"
"slices"
"strings"
"sync/atomic"
"time"
"golang.org/x/exp/slices"
"tailscale.com/health"
"tailscale.com/net/dns/resolver"
"tailscale.com/net/netmon"

View File

@@ -10,11 +10,11 @@ import (
"fmt"
"net"
"net/netip"
"slices"
"strings"
"time"
"github.com/miekg/dns"
"golang.org/x/exp/slices"
"tailscale.com/envknob"
"tailscale.com/net/netns"
"tailscale.com/types/logger"

View File

@@ -15,8 +15,9 @@ import (
"testing"
"time"
"slices"
"github.com/miekg/dns"
"golang.org/x/exp/slices"
"tailscale.com/envknob"
"tailscale.com/tstest"
)

View File

@@ -19,11 +19,11 @@ import (
"net/url"
"os"
"reflect"
"slices"
"sync/atomic"
"time"
"go4.org/netipx"
"golang.org/x/exp/slices"
"tailscale.com/atomicfile"
"tailscale.com/envknob"
"tailscale.com/net/dns/recursive"

Some files were not shown because too many files have changed in this diff.