Compare commits

...

348 Commits

Author SHA1 Message Date
Andrew Lytvynov
51774b3382 WIP: debug logs for app connector debugging 2025-01-09 15:36:35 -08:00
Andrew Dunham
6ddeae7556 types/views: optimize SliceEqualAnyOrderFunc for small slices
If the total number of differences is less than a small amount, just do
the dumb quadratic thing and compare every single object instead of
allocating a map.

Updates tailscale/corp#25479

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I8931b4355a2da4ec0f19739927311cf88711a840
2025-01-09 17:10:36 -05:00
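
A minimal sketch of the two paths described above, assuming a hypothetical cutoff constant (the real types/views SliceEqualAnyOrderFunc operates on views and a key function):

    // equalAnyOrder reports whether a and b hold the same elements regardless
    // of order. Below smallCutoff it does the quadratic comparison; above it,
    // it falls back to counting occurrences in a map.
    func equalAnyOrder[T comparable](a, b []T) bool {
    	if len(a) != len(b) {
    		return false
    	}
    	const smallCutoff = 8 // hypothetical threshold
    	if len(a) <= smallCutoff {
    		// Quadratic path: no map allocation for tiny inputs.
    		var matched [smallCutoff]bool
    	outer:
    		for _, av := range a {
    			for j, bv := range b {
    				if !matched[j] && av == bv {
    					matched[j] = true
    					continue outer
    				}
    			}
    			return false
    		}
    		return true
    	}
    	// General path: count occurrences in a map.
    	counts := make(map[T]int, len(a))
    	for _, av := range a {
    		counts[av]++
    	}
    	for _, bv := range b {
    		counts[bv]--
    		if counts[bv] < 0 {
    			return false
    		}
    	}
    	return true
    }
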
Andrew Dunham
7fa07f3416 types/views: add SliceEqualAnyOrderFunc
Extracted from some code written in the other repo.

Updates tailscale/corp#25479

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I6df062fdffa1705524caa44ac3b6f2788cf64595
2025-01-09 16:48:22 -05:00
Percy Wegmann
a51672cafd prober: record total bytes transferred in DERP bandwidth probes
This will enable Prometheus queries to look at the bandwidth over time windows,
for example 'increase(derp_bw_bytes_total[1h]) / increase(derp_bw_transfer_time_seconds_total[1h])'.

Updates tailscale/corp#25503

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2025-01-09 09:22:44 -06:00
Irbe Krumina
68997e0dfa cmd/k8s-operator,k8s-operator: allow users to set custom labels for the optional ServiceMonitor (#14475)
* cmd/k8s-operator,k8s-operator: allow users to set custom labels for the optional ServiceMonitor

Updates tailscale/tailscale#14381

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2025-01-09 07:15:19 +00:00
Andrew Lytvynov
d8579a48b9 go.mod: bump go-git to v5.13.1 (#14584)
govulncheck flagged a couple fresh vulns in that package:
* https://pkg.go.dev/vuln/GO-2025-3367
* https://pkg.go.dev/vuln/GO-2025-3368

I don't believe these affect us, as we only do git operations from
release tooling, which is all internal and uses hardcoded repo URLs.

Updates #cleanup

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
2025-01-08 12:44:49 -08:00
Mario Minardi
0b4ba4074f client/web: properly show "Log In" for web client on fresh install (#14569)
Change the type of the `IPv4` and `IPv6` members in the `nodeData`
struct to be `netip.Addr` instead of `string`.

We were previously calling `String()` on the `netip.Addr` values, which
returns "invalid IP" when the address is its zero value, and passing
the result into the aforementioned attributes.

This caused rendering issues on the frontend
as we were assuming that the value for `IPv4` and `IPv6` would be falsy
in this case.

The zero value for a `netip.Addr` marshals to an empty string instead,
which is the behaviour we want downstream.

Updates https://github.com/tailscale/tailscale/issues/14568

Signed-off-by: Mario Minardi <mario@tailscale.com>
2025-01-08 13:20:31 -07:00
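
The standard-library behaviour this commit relies on, as a small illustration (not the web client code itself):

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    func main() {
    	var zero netip.Addr
    	fmt.Println(zero.String()) // "invalid IP": the string the frontend used to receive
    	b, _ := zero.MarshalText() // encoding/json uses MarshalText for netip.Addr
    	fmt.Printf("%q\n", b)      // "": empty string, which is falsy on the frontend
    }
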
Will Norris
fa52035574 client/systray: record that systray is running
Updates #1708

Change-Id: Ia101a4a3005adb9118051b3416f5a64a4a45987d
Signed-off-by: Will Norris <will@tailscale.com>
2025-01-08 11:32:02 -08:00
Andrew Dunham
9f17260e21 types/views: add MapViewsEqual and MapViewsEqualFunc
Extracted from some code written in the other repo.

Updates tailscale/corp#25479

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I92c97a63a8f35cace6e89a730938ea587dcefd9b
2025-01-08 14:29:00 -05:00
Brad Fitzpatrick
1d4fd2fb34 hostinfo: improve accuracy of Linux desktop detection heuristic
DBus doesn't imply desktop.

Updates #1708

Change-Id: Id43205aafb293533119256adf372a7d762aa7aca
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2025-01-08 11:12:11 -08:00
Brad Fitzpatrick
8d6b996483 ipn/ipnlocal: add client metric gauge for number of IPNBus connections
Updates #1708

Change-Id: Ic7e28d692b4c48e78c842c26234b861fe42a916e
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2025-01-08 10:59:25 -08:00
Percy Wegmann
c81a95dd53 prober: clone histogram buckets before handing to Prometheus for derp_qd_probe_delays_seconds
Updates tailscale/corp#25697

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2025-01-08 12:02:12 -06:00
Irbe Krumina
8d4ca13cf8 cmd/k8s-operator,k8s-operator: support ingress ProxyGroup type (#14548)
Currently this does not yet do anything apart from creating
the ProxyGroup resources like StatefulSet.

Updates tailscale/corp#24795

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2025-01-08 13:43:17 +00:00
KevinLiang10
009da8a364 ipn/ipnlocal: connect serve config to c2n endpoint
This commit updates the VIPService c2n endpoint on the client to respond with the actual VIPService
configuration stored in the serve config.

Fixes tailscale/corp#24510
Signed-off-by: KevinLiang10 <37811973+KevinLiang10@users.noreply.github.com>
2025-01-07 16:15:07 -05:00
Will Norris
60daa2adb8 all: fix golangci-lint errors
These erroneously blocked a recent PR, which I fixed by simply
re-running CI. But we might as well fix them anyway.
These are mostly `printf` to `print` and a couple of `!=` to `!Equal()`

Updates #cleanup

Signed-off-by: Will Norris <will@tailscale.com>
2025-01-07 13:05:37 -08:00
James Tucker
de9d4b2f88 net/netmon: remove extra panic guard around ParseRIB
This was an extra defense added for #14201 that is no longer required.

Fixes #14201

Signed-off-by: James Tucker <james@tailscale.com>
2025-01-07 12:31:17 -08:00
Brad Fitzpatrick
220dc56f01 go.mod: bump tailscale/wireguard-go for Solaris/Illumos
Updates #14565

Change-Id: Ifb88ab2ee1997c00c3d4316be04f6f4cc71b2cd3
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2025-01-07 11:26:23 -08:00
James Tucker
2c07f5dfcd wgengine/magicsock: refactor maybeRebindOnError
Remove the platform specificity, it is unnecessary complexity.
Deduplicate repeated code as a result of reduced complexity.
Split out error identification code.
Update call-sites and tests.

Updates #14551
Updates tailscale/corp#25648

Signed-off-by: James Tucker <james@tailscale.com>
2025-01-07 10:46:37 -08:00
Andrea Gottardo
6db220b478 controlclient: do not set HTTPS port for any private coordination server IP (#14564)
Fixes tailscale/tailscale#14563

When creating a NoiseClient, ensure that if any private IP address is provided, with both an `http` scheme and an explicit port number, we do not ever attempt to use HTTPS. We were only handling the case of `127.0.0.1` and `localhost`, but `192.168.x.y` is a private IP as well. This uses the `netip` package to check and adds some logging in case we ever need to troubleshoot this.

Signed-off-by: Andrea Gottardo <andrea@gottardo.me>
2025-01-07 10:24:32 -08:00
James Tucker
f4f57b815b wgengine/magicsock: rebind on EPIPE/ECONNRESET
Observed in the wild: some macOS machines gain broken sockets coming out
of sleep (we observe "time jumped", followed by EPIPE on sendto). The
cause of this in the platform is unclear, but the fix is clear: always
rebind if the socket is broken. This can also be created artificially on
Linux via `ss -K`, and other conditions or software on a system could
also lead to the same outcomes.

Updates tailscale/corp#25648

Signed-off-by: James Tucker <james@tailscale.com>
2025-01-07 10:02:35 -08:00
James Tucker
6e45a8304e cmd/derper: improve logging on derp mesh connect
Include the mesh log prefix in all mesh connection setup.

Updates tailscale/corp#25653

Signed-off-by: James Tucker <james@tailscale.com>
2025-01-07 09:47:07 -08:00
Brad Fitzpatrick
cc4aa435ef go.mod: bump github.com/tailscale/peercred for Solaris
This pulls in Solaris/Illumos-specific:

  https://github.com/tailscale/peercred/pull/10
  https://go-review.googlesource.com/c/sys/+/639755

Updates tailscale/peercred#10 (from @nshalman)

Change-Id: I8211035fdcf84417009da352927149d68905c0f1
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2025-01-07 07:39:37 -08:00
Will Norris
b36984cb16 cmd/systray: add cmd/systray back as a small client/systray wrapper
Updates #1708

Change-Id: Ia101a4a3005adb9118051b3416f5a64a4a45987d
Signed-off-by: Will Norris <will@tailscale.com>
2025-01-06 16:49:34 -08:00
Will Norris
82e99fcf84 client/systray: move cmd/systray to client/systray
Updates #1708

Change-Id: Ia101a4a3005adb9118051b3416f5a64a4a45987d
Signed-off-by: Will Norris <will@tailscale.com>
2025-01-06 16:49:34 -08:00
Brad Fitzpatrick
041622c92f ipn/ipnlocal: move where auto exit node selection happens
In the process, because I needed it for testing, make all
LocalBackend-managed goroutines be accounted for. And then in tests,
verify they're no longer running during LocalBackend.Shutdown.

Updates tailscale/corp#19681

Change-Id: Iad873d4df7d30103a4a7863dfacf9e078c77e6a3
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2025-01-06 12:49:44 -08:00
Brad Fitzpatrick
07aae18bca ipn/ipnlocal, util/goroutines: track goroutines for tests, shutdown
Updates #14520
Updates #14517 (in that I pulled this out of there)

Change-Id: Ibc28162816e083fcadf550586c06805c76e378fc
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2025-01-06 12:35:44 -08:00
Brad Fitzpatrick
b90707665e tailcfg: remove unused User fields
Fixes #14542

Change-Id: Ifeb0f90c570c1b555af761161f79df75f18ae3f9
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2025-01-06 12:00:49 -08:00
Brad Fitzpatrick
5da772c670 cmd/tailscale/cli: fix TestUpdatePrefs on macOS
It was failing due to an unaccepted risk ("mac-app-connector") because
it was checking runtime.GOOS ("darwin") instead of the test's env.goos
string value ("linux", which doesn't have the warning).

Fixes #14544

Change-Id: I470d86a6ad4bb18e1dd99d334538e56556147835
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2025-01-06 10:46:57 -08:00
Brad Fitzpatrick
f13b2bce93 tailcfg: flesh out docs
Updates #cleanup
Updates #14542

Change-Id: I41f7ce69d43032e0ba3c866d9c89d2a7eccbf090
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2025-01-06 09:25:32 -08:00
Brad Fitzpatrick
2fb361a3cf ipn: declare NotifyWatchOpt consts without using iota
Updates #cleanup
Updates #1909 (noticed while working on that)

Change-Id: I505001e5294287ad2a937b4db61d9e67de70fa14
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2025-01-04 18:43:27 -08:00
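
Roughly, the declaration style changes from iota-derived shifts to spelled-out values, so inserting or reordering a line can no longer silently change existing bit values (names below are illustrative, not the exact ipn constants):

    type NotifyWatchOpt uint64

    const (
    	// Before: values derived from position.
    	//   NotifyWatchEngineUpdates NotifyWatchOpt = 1 << iota
    	//   NotifyInitialState
    	//   ...
    	// After: each value written out explicitly.
    	NotifyWatchEngineUpdates NotifyWatchOpt = 1 << 0
    	NotifyInitialState       NotifyWatchOpt = 1 << 1
    	NotifyInitialPrefs       NotifyWatchOpt = 1 << 2
    )
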
Marc Paquette
36ea792f06 Fix various linting, vet & static check issues
Fixes #14492

-----

Developer Certificate of Origin
Version 1.1

Copyright (C) 2004, 2006 The Linux Foundation and its contributors.

Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.

Developer's Certificate of Origin 1.1

By making a contribution to this project, I certify that:

(a) The contribution was created in whole or in part by me and I
    have the right to submit it under the open source license
    indicated in the file; or

(b) The contribution is based upon previous work that, to the best
    of my knowledge, is covered under an appropriate open source
    license and I have the right under that license to submit that
    work with modifications, whether created in whole or in part
    by me, under the same open source license (unless I am
    permitted to submit under a different license), as indicated
    in the file; or

(c) The contribution was provided directly to me by some other
    person who certified (a), (b) or (c) and I have not modified
    it.

(d) I understand and agree that this project and the contribution
    are public and that a record of the contribution (including all
    personal information I submit with it, including my sign-off) is
    maintained indefinitely and may be redistributed consistent with
    this project or the open source license(s) involved.

Change-Id: I6dc1068d34bbfa7477e7b7a56a4325b3868c92e1
Signed-off-by: Marc Paquette <marcphilippaquette@gmail.com>
2025-01-04 15:11:10 -08:00
Marc Paquette
60930d19c0 Update README to reference correct Commit Style URL
Change-Id: I2981c685a8905ad58536a8d9b01511d04c3017d1
Signed-off-by: Marc Paquette <marcphilippaquette@gmail.com>
2025-01-04 15:11:10 -08:00
Brad Fitzpatrick
2b8f02b407 ipn: convert ServeConfig Range methods to iterators
These were the last two Range funcs in this repo.

Updates #12912

Change-Id: I6ba0a911933cb5fc4e43697a9aac58a8035f9622
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2025-01-04 14:27:31 -08:00
Brad Fitzpatrick
4b56bf9039 types/views: remove various Map Range funcs; use iterators everywhere
The remaining range funcs in the tree are RangeOverTCPs and
RangeOverWebs in ServeConfig; those will be cleaned up separately.

Updates #12912

Change-Id: Ieeae4864ab088877263c36b805f77aa8e6be938d
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2025-01-04 13:35:27 -08:00
Brad Fitzpatrick
47bd0723a0 all: use iterators in more places instead of Range funcs
And misc cleanup along the way.

Updates #12912

Change-Id: I0cab148b49efc668c6f5cdf09c740b84a713e388
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2025-01-04 11:01:00 -08:00
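
The general shape of the conversion, sketched for a hypothetical container (Go 1.23 range-over-func; not the exact views or ServeConfig code):

    import "iter"

    type set struct{ m map[string]int } // hypothetical container

    // Before: a callback-style Range method.
    //   func (s *set) Range(f func(k string, v int) (cont bool))
    //
    // After: an iterator that callers can use directly with range.
    func (s *set) All() iter.Seq2[string, int] {
    	return func(yield func(string, int) bool) {
    		for k, v := range s.m {
    			if !yield(k, v) {
    				return
    			}
    		}
    	}
    }

    // Callers change from s.Range(func(k string, v int) bool { ... })
    // to: for k, v := range s.All() { ... }
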
Joe Tsai
ad8d8e37de go.mod: update github.com/go-json-experiment/json (#14522)
Updates tailscale/corp#11038

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2025-01-03 16:01:20 -08:00
Brad Fitzpatrick
402fc9d65f control/controlclient: remove optimization that was more convoluted than useful
While working on #13390, I ran across this non-idiomatic
pointer-to-view and parallel-sorted-map accounting code that was all
just to avoid a sort later.

But the sort later when building a new netmap.NetworkMap is already a
drop in the bucket of CPU compared to the work & allocs incurred by
mapSession.netmap and LocalBackend's spamming of the full netmap
(potentially tens of thousands of peers, MBs of JSON) out to IPNBus
clients for any tiny little change (a node changing online status, etc).

Removing the parallel sorted slice let everything be simpler to reason
about, so this does that. The sort might take a bit more CPU time now
in theory, but in practice for any netmap size for which it'd matter,
the quadratic netmap IPN bus spam (which we need to fix soon) will
overshadow that little sort.

Updates #13390
Updates #1909

Change-Id: I3092d7c67dc10b2a0f141496fe0e7e98ccc07712
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2025-01-03 11:09:23 -08:00
Brad Fitzpatrick
1e2e319e7d util/slicesx: add MapKeys and MapValues from golang.org/x/exp/maps
Importing the ~deprecated golang.org/x/exp/maps as "xmaps" to not
shadow the std "maps" was getting ugly.

And using slices.Collect on an iterator is verbose & allocates more.

So copy (x)maps.Keys+Values into our slicesx package instead.

Updates #cleanup
Updates #12912
Updates #14514 (pulled out of that change)

Change-Id: I5e68d12729934de93cf4a9cd87c367645f86123a
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2025-01-03 10:48:31 -08:00
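
Assumed shape of the copied helpers (matching the old x/exp/maps behaviour: freshly allocated slices in unspecified order):

    func MapKeys[K comparable, V any](m map[K]V) []K {
    	ret := make([]K, 0, len(m))
    	for k := range m {
    		ret = append(ret, k)
    	}
    	return ret
    }

    func MapValues[K comparable, V any](m map[K]V) []V {
    	ret := make([]V, 0, len(m))
    	for _, v := range m {
    		ret = append(ret, v)
    	}
    	return ret
    }
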
Jason Barnett
17b881538a wgengine/router: refactor udm-pro into broader ubnt support
Fixes #14453

Signed-off-by: Jason Barnett <J@sonBarnett.com>
2025-01-03 13:06:16 -05:00
Brad Fitzpatrick
e3bcb2ec83 ipn/ipnlocal: use context.CancelFunc type for doc clarity
Using context.CancelFunc as the type (instead of func()) answers
questions like whether it's okay to call it multiple times, whether
it blocks, etc. And that's the type it actually is in this case.

Updates #cleanup

Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2025-01-03 08:59:53 -08:00
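
The change is purely a type-level documentation win; roughly:

    import "context"

    type watcher struct {
    	// Before: cancel func()
    	// After: the named type carries its documented contract: it may be
    	// called from multiple goroutines, repeat calls do nothing, and it
    	// does not block.
    	cancel context.CancelFunc
    }
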
Brad Fitzpatrick
03b9361f47 ipn: update reference to Notify's Swift definition
Updates #cleanup

Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2025-01-03 08:59:45 -08:00
Brad Fitzpatrick
ff095606cc all: add means to set device posture attributes from node
Updates tailscale/corp#24690
Updates #4077

Change-Id: I05fe799beb1d2a71d1ec3ae08744cc68bcadae2a
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-12-31 12:57:23 -08:00
Erisa A
30d3e7b242 scripts/install.sh: add special case for Parrot Security (#14487)
Their `os-release` doesn't follow convention.
Fixes #10778

Signed-off-by: Erisa A <erisa@tailscale.com>
2024-12-30 17:22:48 +00:00
Will Norris
c43c5ca003 cmd/systray: properly set tooltip on different platforms
On Linux, systray.SetTitle actually seems to set the tooltip on all
desktops I've tested on.  But on macOS, it actually does set a title
that is always displayed in the systray area next to the icon. This
change should properly set the tooltip across platforms.

Updates #1708

Change-Id: Ia101a4a3005adb9118051b3416f5a64a4a45987d
Signed-off-by: Will Norris <will@tailscale.com>
2024-12-27 12:45:51 -08:00
Will Norris
5a4148e7e8 cmd/systray: update state management and initialization
Move a number of global state vars into the Menu struct, keeping things
better encapsulated. The systray package still relies on its own global
state, so only a single Menu instance can run at a time.

Move a lot of the initialization logic out of onReady, in particular
fetching the latest tailscale state. Instead, populate the state before
calling systray.Run, which fixes a timing issue in GNOME (#14477).

This change also creates a separate bgContext for actions not tied to menu
item clicks. Because we have to rebuild the entire menu regularly, we
cancel that context as needed, which can cancel subsequent updateState
calls.

Also exit cleanly on SIGINT and SIGTERM.

Updates #1708
Fixes #14477

Change-Id: Ia101a4a3005adb9118051b3416f5a64a4a45987d
Signed-off-by: Will Norris <will@tailscale.com>
2024-12-27 11:05:26 -08:00
Will Norris
86f273d930 cmd/systray: set app icon and title consistently
Refactor code to set app icon and title as part of rebuild, rather than
separately in eventLoop. This fixes several cases where they weren't
getting updated properly. This change also makes use of the new exit
node icons.

Updates #1708

Change-Id: Ia101a4a3005adb9118051b3416f5a64a4a45987d
Signed-off-by: Will Norris <will@tailscale.com>
2024-12-23 17:43:44 -08:00
Will Norris
2bdbe5b2ab cmd/systray: add icons for exit node online and offline
Restructure tsLogo to allow setting a mask to be used when drawing the
logo dots, as well as adding an overlay icon, such as the arrow when
connected to an exit node.

The icon still renders as white on black, but this change also prepares
for a black-on-white version, as well as a fully transparent icon. I
don't know if we can consistently determine which to use, so this just
keeps the single icon for now.

Updates #1708

Change-Id: Ia101a4a3005adb9118051b3416f5a64a4a45987d
Signed-off-by: Will Norris <will@tailscale.com>
2024-12-23 17:43:44 -08:00
James Tucker
68b12a74ed metrics,syncs: add ShardedInt support to metrics.LabelMap
metrics.LabelMap grows slightly heavier, needing a lock to ensure
proper ordering for newly initialized ShardedInt values. An Add method
enables callers to use .Add for both expvar.Int and syncs.ShardedInt
values, but retains the original behavior of defaulting to initializing
expvar.Int values.

Updates tailscale/corp#25450

Co-Authored-By: Andrew Dunham <andrew@du.nham.ca>
Signed-off-by: James Tucker <james@tailscale.com>
2024-12-23 13:10:18 -08:00
Erisa A
72b278937b scripts/installer.sh: allow CachyOS for Arch packages (#14464)
Fixes #13955

Signed-off-by: Erisa A <erisa@tailscale.com>
2024-12-23 17:53:06 +00:00
Will Norris
3837b6cebc cmd/systray: rebuild menu on pref change, assorted other fixes
- rebuild menu when prefs change outside of systray, such as setting an
  exit node
- refactor onClick handler code
- compare lowercase country name, the same as macOS and Windows (now
  sorts Ukraine before USA)
- fix "connected / disconnected" menu items on stopped status
- prevent nil pointer on "This Device" menu item

Updates #1708

Change-Id: Ia101a4a3005adb9118051b3416f5a64a4a45987d
Signed-off-by: Will Norris <will@tailscale.com>
2024-12-23 09:01:30 -08:00
Erisa A
76ca1adc64 scripts/installer.sh: accept different capitalisation of deepin (#14463)
Newer Deepin Linux versions use `deepin` as their ID, older ones used `Deepin`.

Fixes #13570

Signed-off-by: Erisa A <erisa@tailscale.com>
2024-12-23 16:47:55 +00:00
Brad Fitzpatrick
9e2819b5d4 util/stringsx: add package for extra string functions, like CompareFold
Noted as useful during review of #14448.

Updates #14457

Change-Id: I0f16f08d5b05a8e9044b19ef6c02d3dab497f131
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-12-23 07:43:56 -08:00
Erisa A
4267d0fc5b .github: update matrix of installer.sh tests (#14462)
Remove EOL Ubuntu versions.
Add new Ubuntu LTS.
Update Alpine to test latest version.

Also, make the test run when its workflow is updated and installer.sh isn't.

Updates #cleanup

Signed-off-by: Erisa A <erisa@tailscale.com>
2024-12-23 14:48:35 +00:00
Erisa A
c4f9f955ab scripts/installer.sh: add support for PikaOS (#14461)
Fixes #14460

Signed-off-by: Erisa A <erisa@tailscale.com>
2024-12-23 12:53:54 +00:00
Jason Barnett
8d4ea4d90c wgengine/router: add ip rules for unifi udm-pro
Fixes: #4038

Signed-off-by: Jason Barnett <J@sonBarnett.com>
2024-12-21 11:47:20 -05:00
Will Norris
10d4057a64 cmd/systray: add visual workarounds for gnome, mac, and windows
Updates #1708

Change-Id: Ia101a4a3005adb9118051b3416f5a64a4a45987d
Signed-off-by: Will Norris <will@tailscale.com>
2024-12-20 17:57:42 -08:00
Will Norris
cb59943501 cmd/systray: add exit nodes menu
This commit builds the exit node menu including the recommended exit
node, if available, as well as tailnet and mullvad exit nodes.

This does not yet update the menu based on changes in exit node outside
of the systray app, which will come later.  This also does not include
the ability to run as an exit node.

Updates #1708

Change-Id: Ia101a4a3005adb9118051b3416f5a64a4a45987d
Signed-off-by: Will Norris <will@tailscale.com>
2024-12-20 17:32:48 -08:00
Naman Sood
887472312d tailcfg: rename and retype ServiceHost capability (#14380)
* tailcfg: rename and retype ServiceHost capability, add value type

Updates tailscale/corp#22743.

In #14046, this was accidentally made a PeerCapability when it
should have been NodeCapability. Also, renaming it to use the
nomenclature that we decided on after #14046 went up, and adding
the type of the value that will be passed down in the RawMessage
for this capability.

This shouldn't break anything, since no one was using this string or
variable yet.

Signed-off-by: Naman Sood <mail@nsood.in>
2024-12-20 15:57:46 -05:00
Will Norris
256da8dfb5 cmd/systray: remove new menu delay on KDE
The new menu delay added to fix libdbusmenu systrays causes problems
with KDE. Given the state of wildly varying systray implementations, I
suspect we may need more desktop-specific hacks, so I'm setting this up
to accommodate that.

Updates #1708
Updates #14431

Change-Id: Ia101a4a3005adb9118051b3416f5a64a4a45987d
Signed-off-by: Will Norris <will@tailscale.com>
2024-12-20 10:12:07 -08:00
Percy Wegmann
5095efd628 prober: make histogram buckets cumulative
Histogram buckets should include counts for all values under the bucket ceiling,
not just those between the ceiling and the next lower ceiling.

See https://prometheus.io/docs/tutorials/understanding_metric_types/\#histogram

Updates tailscale/corp#24522

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-12-20 10:28:37 -06:00
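
A small sketch of the conversion (not the prober code): a Prometheus bucket labelled le=X must count every observation at or below X, so per-bucket counts are summed in ceiling order.

    // cumulative turns per-bucket counts (observations between the previous
    // ceiling and ceilings[i]) into Prometheus-style cumulative counts.
    func cumulative(ceilings []float64, perBucket []uint64) map[float64]uint64 {
    	out := make(map[float64]uint64, len(ceilings))
    	var running uint64
    	for i, c := range ceilings {
    		running += perBucket[i]
    		out[c] = running
    	}
    	return out
    }
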
Tom Proctor
3adad364f1 cmd/k8s-operator,k8s-operator: include top-level CRD descriptions (#14435)
When reading https://doc.crds.dev/github.com/tailscale/tailscale/tailscale.com/ProxyGroup/v1alpha1@v1.78.3
I noticed there is no top-level description for ProxyGroup and Recorder. Add
one to give some high-level direction.

Updates #cleanup

Change-Id: I3666c5445be272ea5a1d4d02b6d5ad4c23afb09f

Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
2024-12-20 16:12:56 +00:00
Will Norris
89adcd853d cmd/systray: improve profile menu
Bring the UI closer to macOS and Windows:
- split login and tailnet name over separate lines
- render profile picture (with very simple caching)
- use checkbox to indicate active profile. I've not found any desktops
  that can't render checkboxes, so I'd like to explore other options
  if needed.

Updates #1708

Change-Id: Ia101a4a3005adb9118051b3416f5a64a4a45987d
Signed-off-by: Will Norris <will@tailscale.com>
2024-12-19 15:23:02 -08:00
James Tucker
e8f1721147 syncs: add ShardedInt expvar.Var type
ShardedInt provides an int type expvar.Var that supports more efficient
writes at high frequencies (one order of magnitude on an M1 Max, much
more on NUMA systems).

There are two implementations of ShardValue, one that abuses sync.Pool
that will work on current public Go versions, and one that takes a
dependency on a runtime.TailscaleP function exposed in Tailscale's Go
fork. The sync.Pool variant has about 10x the throughput of a single
atomic integer on an M1 Max, and the runtime.TailscaleP variant is about
10x faster than the sync.Pool variant.

Neither variant has perfect distribution, nor does either always avoid
cross-CPU sharing, as there is no locking or affinity to ensure that the
time of yield is on the same core as the time of core biasing, but in
the average case the distributions are enough to provide substantially
better performance.

See golang/go#18802 for a related upstream proposal.

Updates tailscale/go#109
Updates tailscale/corp#25450

Signed-off-by: James Tucker <james@tailscale.com>
2024-12-19 14:58:28 -08:00
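
A condensed sketch of the sync.Pool variant described above (the real syncs.ShardValue/ShardedInt API differs): each writer borrows a shard, does one atomic add, and returns it, so concurrent writers rarely touch the same cache line; reads sum every shard ever created.

    import (
    	"sync"
    	"sync/atomic"
    )

    type shardedInt struct {
    	mu     sync.Mutex
    	shards []*atomic.Int64 // every shard ever handed out, so no count is lost
    	pool   sync.Pool
    }

    func newShardedInt() *shardedInt {
    	s := &shardedInt{}
    	s.pool.New = func() any {
    		v := new(atomic.Int64)
    		s.mu.Lock()
    		s.shards = append(s.shards, v)
    		s.mu.Unlock()
    		return v
    	}
    	return s
    }

    func (s *shardedInt) Add(delta int64) {
    	v := s.pool.Get().(*atomic.Int64) // usually the shard cached for this P
    	v.Add(delta)
    	s.pool.Put(v)
    }

    func (s *shardedInt) Value() int64 {
    	s.mu.Lock()
    	defer s.mu.Unlock()
    	var sum int64
    	for _, v := range s.shards {
    		sum += v.Load()
    	}
    	return sum
    }
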
Will Norris
2d4edd80f1 cmd/systray: add extra padding around notification icon
Some notification managers crop the application icon to a circle, so
ensure we have enough padding to account for that.

Updates #1708

Change-Id: Ia101a4a3005adb9118051b3416f5a64a4a45987d
Signed-off-by: Will Norris <will@tailscale.com>
2024-12-19 13:31:54 -08:00
Percy Wegmann
00a4504cf1 cmd/derpprobe,prober: add ability to perform continuous queuing delay measurements against DERP servers
This new type of probe sends DERP packets sized similarly to CallMeMaybe packets
at a rate of 10 packets per second. It records the round-trip times in a Prometheus
histogram. It also keeps track of how many packets are dropped. Packets that fail to
arrive within 5 seconds are considered dropped.

Updates tailscale/corp#24522

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-12-19 10:45:56 -06:00
Andrew Lytvynov
6ae0287a57 cmd/systray: add account switcher
Updates #1708

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
2024-12-19 08:26:17 -08:00
Joe Tsai
ff5b4bae99 syncs: add MutexValue (#14422)
MutexValue is simply a value guarded by a mutex.
For any type that is not pointer-sized,
MutexValue will perform much better than AtomicValue
since it will not incur an allocation boxing the value
into an interface value (which is how Go's atomic.Value
is implemented under-the-hood).

Updates #cleanup

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2024-12-18 17:11:22 -08:00
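
The core of the type, sketched with an assumed method set:

    import "sync"

    // MutexValue is a value of type T guarded by a mutex. Unlike atomic.Value,
    // Load and Store never box the value into an interface, so no allocation.
    type MutexValue[T any] struct {
    	mu sync.Mutex
    	v  T
    }

    func (m *MutexValue[T]) Load() T {
    	m.mu.Lock()
    	defer m.mu.Unlock()
    	return m.v
    }

    func (m *MutexValue[T]) Store(v T) {
    	m.mu.Lock()
    	defer m.mu.Unlock()
    	m.v = v
    }
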
Tom Proctor
b3d4ffe168 docs/k8s: add some high-level operator architecture diagrams (#13915)
This is an experiment to see how useful we will find it to have some
text-based diagrams to document how various components of the operator
work. There are no plans to link to this from elsewhere yet, but
hopefully it will be a useful reference internally.

Updates #cleanup

Change-Id: If5911ed39b09378fec0492e87738ec0cc3d8731e
Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
2024-12-17 15:36:57 +00:00
Joe Tsai
b62a013ecb Switch logging service from log.tailscale.io to log.tailscale.com (#14398)
Updates tailscale/corp#23617

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2024-12-16 14:53:34 -08:00
Brad Fitzpatrick
2506b81471 prober: fix WithBandwidthProbing behavior with optional tunAddress
1ed9bd76d6 meant to make tunAddress optional.

Updates tailscale/corp#24635

Change-Id: Idc4a8540b294e480df5bd291967024c04df751c0
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-12-16 12:18:54 -08:00
Brad Fitzpatrick
0cc2a8dc0d go.toolchain.rev: bump Go toolchain
For https://github.com/tailscale/go/pull/108 so we can depend on it in
other repos. (This repo can't yet use it; we permit building
tailscale/tailscale with the latest stock Go release.) But that will be
in Go 1.24. We're just impatient elsewhere and would like it in the
control plane code earlier.

Updates tailscale/corp#25406

Change-Id: I53ff367318365c465cbd02cea387c8ff1eb49fab
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-12-16 11:26:32 -08:00
Joe Tsai
5883ca72a7 types/opt: fix test to be agnostic to omitzero support (#14401)
The omitzero tag option has been backported to v1 "encoding/json"
from the "encoding/json/v2" prototype and will land in Go1.24.
Until we fully upgrade to Go1.24, adjust the test to be agnostic
to which version of Go someone is using.

Updates tailscale/corp#25406

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2024-12-16 10:56:55 -08:00
Irbe Krumina
cc168d9f6b cmd/k8s-operator: fix ProxyGroup hostname (#14336)
Updates tailscale/tailscale#14325

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-12-16 06:11:18 +00:00
Percy Wegmann
1ed9bd76d6 prober: perform DERP bandwidth probes over TUN device to mimic real client
Updates tailscale/corp#24635

Co-authored-by: Mario Minardi <mario@tailscale.com>
Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-12-13 15:50:47 -06:00
James Tucker
aa04f61d5e net/netcheck: adjust HTTPS latency check to connection time and avoid data race
The go-httpstat package has a data race when used with connections that
are performing happy-eyeballs connection setups as we are in the DERP
client. There is a long-stale PR upstream to address this, however
revisiting the purpose of this code suggests we don't really need
httpstat here.

The code populates a latency table that may be used to compare to STUN
latency, which is a lightweight RTT check. Switching out the reported
timing here to simply the HTTP request RTT avoids the
problematic package.

Fixes tailscale/corp#25095

Signed-off-by: James Tucker <james@tailscale.com>
2024-12-13 12:53:10 -08:00
Brad Fitzpatrick
73128e2523 ssh/tailssh: remove unused public key support
When we first made Tailscale SSH, we assumed people would want public
key support soon after. Turns out that hasn't been the case; people
love the Tailscale identity authentication and check mode.

In light of CVE-2024-45337, just remove all our public key code to not
distract people, and to make the code smaller. We can always get it
back from git if needed.

Updates tailscale/corp#25131
Updates golang/go#70779

Co-authored-by: Percy Wegmann <percy@tailscale.com>
Change-Id: I87a6e79c2215158766a81942227a18b247333c22
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-12-12 11:16:55 -08:00
Adrian Dewhurst
716cb37256 util/dnsname: use vizerror for all errors
The errors emitted by util/dnsname are all written in an at least
moderately friendly way, and none of them contain sensitive
information. They should be
safe to display to end users.

Updates tailscale/corp#9025

Change-Id: Ic58705075bacf42f56378127532c5f28ff6bfc89
Signed-off-by: Adrian Dewhurst <adrian@tailscale.com>
2024-12-12 10:29:36 -05:00
Joe Tsai
c9188d7760 types/bools: add IfElse (#14272)
The IfElse function is equivalent to the ternary (c ? a : b) operator
in many other languages like C. Unfortunately, this function
cannot perform short-circuit evaluation like in many other languages,
but this is a restriction that's not much different
than the pre-existing cmp.Or function.

The argument against ternary operators in Go is that
nested ternary operators become unreadable
(e.g., (c1 ? (c2 ? a : b) : (c2 ? x : y))).
But a single layer of ternary expressions can sometimes
make code much more readable.

Having the bools.IfElse function gives code authors the
ability to decide whether use of this is more readable or not.
Obviously, code authors will need to be judicious about
their use of this helper function.
Readability is more of an art than a science.

Updates #cleanup

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2024-12-11 10:55:33 -08:00
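
The helper is tiny; a sketch with an assumed signature:

    // IfElse returns a if c is true and b otherwise. Note that, unlike a real
    // ternary operator, both a and b are evaluated before the call.
    func IfElse[T any](c bool, a, b T) T {
    	if c {
    		return a
    	}
    	return b
    }

    // e.g. state := bools.IfElse(online, "online", "offline")
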
Joe Tsai
0045860060 types/iox: add function types for Reader and Writer (#14366)
Throughout our codebase we have types that exist only
to implement an io.Reader or io.Writer, when it would have been
simpler, cleaner, and more readable to use an inlined function literal
that closes over the relevant types.

This is arguably more readable since it keeps the semantic logic
in place rather than have it be isolated elsewhere.

Note that a function literal that closes over some variables
is semantic equivalent to declaring a struct with fields and
having the Read or Write method mutate those fields.

Updates #cleanup

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2024-12-11 10:55:21 -08:00
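
A sketch of the adapter types (assumed names), which let a closure satisfy io.Reader or io.Writer directly:

    import "io"

    type ReaderFunc func(p []byte) (n int, err error)

    func (f ReaderFunc) Read(p []byte) (int, error) { return f(p) }

    type WriterFunc func(p []byte) (n int, err error)

    func (f WriterFunc) Write(p []byte) (int, error) { return f(p) }

    // Usage: an io.Writer built from a closure over a local slice,
    // with no single-purpose struct type needed.
    func collect(r io.Reader) ([]byte, error) {
    	var buf []byte
    	_, err := io.Copy(WriterFunc(func(p []byte) (int, error) {
    		buf = append(buf, p...)
    		return len(p), nil
    	}), r)
    	return buf, err
    }
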
Irbe Krumina
6e552f66a0 cmd/containerboot: don't attempt to patch a Secret field without permissions (#14365)
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-12-11 14:58:44 +00:00
Tom Proctor
f1ccdcc713 cmd/k8s-operator,k8s-operator: operator integration tests (#12792)
This is the start of an integration/e2e test suite for the tailscale operator.
It currently only tests two major features, ingress proxy and API server proxy,
but we intend to expand it to cover more features over time. It also only
supports manual runs for now. We intend to integrate it into CI checks in a
separate update when we have planned how to securely provide CI with the secrets
required for connecting to a test tailnet.

Updates #12622

Change-Id: I31e464bb49719348b62a563790f2bc2ba165a11b
Co-authored-by: Irbe Krumina <irbe@tailscale.com>
Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
2024-12-11 14:48:57 +00:00
Irbe Krumina
fa655e6ed3 cmd/containerboot: add more tests, check that egress service config only set on kube (#14360)
Updates tailscale/tailscale#14357

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-12-11 12:59:42 +00:00
Irbe Krumina
0cc071f154 cmd/containerboot: don't attempt to write kube Secret in non-kube environments (#14358)
Updates tailscale/tailscale#14354

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-12-11 10:56:12 +00:00
Bjorn Neergaard
8b1d01161b cmd/containerboot: guard kubeClient against nil dereference (#14357)
A method on kc was called unconditionally, even if it was not initialized,
leading to a nil pointer dereference when TS_SERVE_CONFIG was set
outside Kubernetes.

Add a guard symmetric with other uses of the kubeClient.

Fixes #14354.

Signed-off-by: Bjorn Neergaard <bjorn@neersighted.com>
2024-12-11 09:52:56 +00:00
dependabot[bot]
d54cd59390 .github: Bump github/codeql-action from 3.27.1 to 3.27.6 (#14332)
Bumps [github/codeql-action](https://github.com/github/codeql-action) from 3.27.1 to 3.27.6.
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](4f3212b617...aa57810251)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-10 15:15:11 -07:00
dependabot[bot]
fa28b024d6 .github: Bump actions/cache from 4.1.2 to 4.2.0 (#14331)
Bumps [actions/cache](https://github.com/actions/cache) from 4.1.2 to 4.2.0.
- [Release notes](https://github.com/actions/cache/releases)
- [Changelog](https://github.com/actions/cache/blob/main/RELEASES.md)
- [Commits](6849a64899...1bd1e32a3b)

---
updated-dependencies:
- dependency-name: actions/cache
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-10 14:32:04 -07:00
Mario Minardi
ea3d0bcfd4 prober,derp/derphttp: make dev-mode DERP probes work without TLS (#14347)
Make dev-mode DERP probes work without TLS. Properly dial port `3340`
when not using HTTPS while dialing nodes in `derphttp_client`. Skip
verifying TLS state in `newConn` if we are not running a prober.

Updates tailscale/corp#24635

Signed-off-by: Percy Wegmann <percy@tailscale.com>
Co-authored-by: Percy Wegmann <percy@tailscale.com>
2024-12-10 10:51:03 -07:00
Mike O'Driscoll
24b243c194 derp: add env var setting server send queue depth (#14334)
Use envknob to configure the per-client send
queue depth for the DERP server.

Fixes tailscale/corp#24978

Signed-off-by: Mike O'Driscoll <mikeo@tailscale.com>
2024-12-10 08:58:27 -05:00
Tom Proctor
06c5e83c20 hostinfo: fix testing in container (#14330)
Previously this unit test failed if it was run in a container. Update the assert
to focus on exactly the condition we are trying to assert: the package type
should only be 'container' if we use the build tag.

Updates #14317

Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
2024-12-09 20:42:10 +00:00
Mike O'Driscoll
c2761162a0 cmd/stunc: enforce read timeout deadline (#14309)
Make arg parsing use the flag package for adding a new
parameter that requires parsing.

Enforce a read timeout deadline while waiting for a response
from the STUN server provided in the args; otherwise
the program will never exit.

Fixes #14267

Signed-off-by: Mike O'Driscoll <mikeo@tailscale.com>
2024-12-06 14:27:52 -05:00
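
The shape of the timeout fix, sketched with the deadline left as a parameter (not the exact stunc code):

    import (
    	"net"
    	"time"
    )

    // readResponse waits for a single STUN response but gives up after the
    // timeout instead of blocking forever.
    func readResponse(conn net.Conn, timeout time.Duration) ([]byte, error) {
    	if err := conn.SetReadDeadline(time.Now().Add(timeout)); err != nil {
    		return nil, err
    	}
    	buf := make([]byte, 1500)
    	n, err := conn.Read(buf) // returns a timeout error once the deadline passes
    	if err != nil {
    		return nil, err
    	}
    	return buf[:n], nil
    }
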
Nick Khyl
f817860079 VERSION.txt: this is v1.79.0
Signed-off-by: Nick Khyl <nickk@tailscale.com>
2024-12-06 11:25:12 -06:00
Percy Wegmann
06a82f416f cmd,{get-authkey,tailscale}: remove unnecessary scope qualifier from OAuth clients
OAuth clients that were used to generate an auth_key previously
specified the scope 'device'. 'device' is not an actual scope;
the real scope is 'devices'. The resulting OAuth token ended up
including all scopes from the specified OAuth client, so the code
was able to successfully create auth_keys.

It's better not to hardcode a scope here anyway, so that we have
the flexibility of changing which scope(s) are used in the future
without having to update old clients.

Since the qualifier never actually did anything, this commit simply
removes it.

Updates tailscale/corp#24934

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-12-06 09:29:07 -06:00
Brad Fitzpatrick
dc6728729e health: fix TestHealthMetric to pass on release branch
Fixes #14302

Change-Id: I9fd893a97711c72b713fe5535f2ccb93fadf7452
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-12-05 15:50:56 -08:00
Joe Tsai
a482dc037b logpolicy: cleanup options API and allow setting http.Client (#11503)
This package grew organically over time and
is an awful mix of explicitly declared options and
globally set parameters via environment variables and
other subtle effects.

Add a new Options and TransportOptions type to
allow for the creation of a Policy or http.RoundTripper
with some set of options.
The options struct avoids the need to add yet more
NewXXX functions for every possible combination of
ordered arguments.

The goal of this refactor is to allow specifying the http.Client
to use with the Policy.

Updates tailscale/corp#18177

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2024-12-05 15:50:24 -08:00
Andrew Lytvynov
66aa774167 cmd/gitops-pusher: default previousEtag to controlEtag (#14296)
If previousEtag is empty, then we should assume control ACLs were not modified
manually and push the local ACLs. Instead, we were defaulting to localEtag,
which would be different if local ACLs were different from control.

AFAIK this was always buggy, but never reported?

Fixes #14295

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
2024-12-05 15:00:54 -08:00
James Tucker
b37a478cac go.mod: bump x/net and dependencies
Pulling in upstream fix for #14201.

Updates #14201

Signed-off-by: James Tucker <james@tailscale.com>
2024-12-05 14:35:15 -08:00
Brad Fitzpatrick
87546a5edf cmd/derper: allow absent SNI when using manual certs and IP literal for hostname
Updates #11776

Change-Id: I81756415feb630da093833accc3074903ebd84a7
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-12-05 09:56:48 -08:00
Irbe Krumina
614c612643 net/netcheck: preserve STUN port defaulting to 3478 (#14289)
Updates tailscale/tailscale#14287

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-12-05 13:21:03 +00:00
Tom Proctor
df94a14870 cmd/k8s-operator: don't error for transient failures (#14073)
Every so often, the ProxyGroup and other controllers lose an optimistic locking race
with other controllers that update the objects they create. Stop treating
this as an error event, and instead just log an info level log line for it.

Fixes #14072

Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
2024-12-05 12:11:22 +00:00
James Tucker
7f9ebc0a83 cmd/tailscale,net/netcheck: add debug feature to force preferred DERP
This provides an interface for a user to force a preferred DERP outcome
for all future netchecks that will take precedence unless the forced
region is unreachable.

The option does not persist and will be lost when the daemon restarts.

Updates tailscale/corp#18997
Updates tailscale/corp#24755

Signed-off-by: James Tucker <james@tailscale.com>
2024-12-04 16:52:56 -08:00
Brad Fitzpatrick
74069774be net/tstun: remove tailscaled_outbound_dropped_packets_total reason=acl metric for now
Updates #14280

Change-Id: Idff102b3d7650fc9dfbe0c340168806bdf542d76
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-12-04 08:55:54 -08:00
Irbe Krumina
2aac916888 cmd/{containerboot,k8s-operator},kube/kubetypes: kube Ingress L7 proxies only advertise HTTPS endpoint when ready (#14171)
cmd/containerboot,kube/kubetypes,cmd/k8s-operator: detect if Ingress is created in a tailnet that has no HTTPS

This attempts to make Kubernetes Operator L7 Ingress setup failures more explicit:
- the Ingress resource now only advertises the HTTPS endpoint via status.ingress.loadBalancer.hostname when/if the proxy has successfully loaded serve config
- the proxy attempts to catch cases where HTTPS is disabled for the tailnet and logs a warning

Updates tailscale/tailscale#12079
Updates tailscale/tailscale#10407

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-12-04 12:00:04 +00:00
Irbe Krumina
aa43388363 cmd/k8s-operator: fix a bunch of status equality checks (#14270)
Updates tailscale/tailscale#14269

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-12-04 06:46:51 +00:00
Oliver Rahner
cbf1a4efe9 cmd/k8s-operator/deploy/chart: allow reading OAuth creds from a CSI driver's volume and annotating operator's Service account (#14264)
cmd/k8s-operator/deploy/chart: allow reading OAuth creds from a CSI driver's volume and annotating operator's Service account

Updates #14264

Signed-off-by: Oliver Rahner <o.rahner@dke-data.com>
2024-12-03 17:00:40 +00:00
Tom Proctor
efdfd54797 cmd/k8s-operator: avoid port collision with metrics endpoint (#14185)
When the operator enables metrics on a proxy, it uses the port 9001,
and in the near future it will start using 9002 for the debug endpoint
as well. Make sure we don't choose ports from a range that includes
9001 so that we never clash. Setting TS_SOCKS5_SERVER, TS_HEALTHCHECK_ADDR_PORT,
TS_OUTBOUND_HTTP_PROXY_LISTEN, and PORT could also open arbitrary ports,
so we will need to document that users should not choose ports from the
10000-11000 range for those settings.

Updates #13406

Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
2024-12-03 15:02:42 +00:00
Irbe Krumina
9f9063e624 cmd/k8s-operator,k8s-operator,go.mod: optionally create ServiceMonitor (#14248)
* cmd/k8s-operator,k8s-operator,go.mod: optionally create ServiceMonitor

Adds a new spec.metrics.serviceMonitor field to ProxyClass.
If that's set to true (and metrics are enabled), the operator
will create a Prometheus ServiceMonitor for each proxy to which
the ProxyClass applies.
Additionally, create a metrics Service for each proxy that has
metrics enabled.

Updates tailscale/tailscale#11292

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-12-03 12:35:25 +00:00
Irbe Krumina
eabb424275 cmd/k8s-operator,docs/k8s: run tun mode proxies in privileged containers (#14262)
We were previously relying on unintended behaviour by runc where
all containers were by default given read/write/mknod permissions
for tun devices.
This behaviour was removed in https://github.com/opencontainers/runc/pull/3468
and released in runc 1.2.
The containerd container runtime, used by Docker and the majority of Kubernetes
distributions, bumped runc to 1.2 in 1.7.24 (https://github.com/containerd/containerd/releases/tag/v1.7.24),
thus breaking our reference tun-mode Tailscale Kubernetes manifests and Kubernetes
operator proxies.

This PR changes all Kubernetes container configs that run Tailscale in tun mode
to privileged. This should not be a breaking change, because all these containers
would run in a Pod that already has a privileged init container.

Updates tailscale/tailscale#14256
Updates tailscale/tailscale#10814

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-12-03 07:01:14 +00:00
KevinLiang10
3f54572539 IPN: Update ServeConfig to accept configuration for Services.
This commit updates ServeConfig to allow configuration of Services (VIPServices for now) via Serve.
The scope of this commit is only adding the Services field to ServeConfig. The field doesn't actually
allow packets to flow yet. The purpose of this commit is to unblock other work on the k8s end.

Updates #22953

Signed-off-by: KevinLiang10 <37811973+KevinLiang10@users.noreply.github.com>
2024-12-02 17:35:31 -05:00
Brad Fitzpatrick
8d0c690f89 net/netcheck: clean up ICMP probe AddrPort lookup
Fixes #14200

Change-Id: Ib086814cf63dda5de021403fe1db4fb2a798eaae
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-12-02 09:28:00 -08:00
Tom Proctor
24095e4897 cmd/containerboot: serve health on local endpoint (#14246)
* cmd/containerboot: serve health on local endpoint

We introduced stable (user) metrics in #14035, and `TS_LOCAL_ADDR_PORT`
with it. Rather than requiring users to specify a new addr/port
combination for each new local endpoint they want the container to
serve, this combines the health check endpoint onto the local addr/port
used by metrics if `TS_ENABLE_HEALTH_CHECK` is used instead of
`TS_HEALTHCHECK_ADDR_PORT`.

`TS_LOCAL_ADDR_PORT` now defaults to binding to all interfaces on 9002
so that it works more seamlessly and with less configuration in
environments other than Kubernetes, where the operator always overrides
the default anyway. In particular, listening on localhost would not be
accessible from outside the container, and many scripted container
environments do not know the IP address of the container before it's
started. Listening on all interfaces allows users to just set one env
var (`TS_ENABLE_METRICS` or `TS_ENABLE_HEALTH_CHECK`) to get a fully
functioning local endpoint they can query from outside the container.

Updates #14035, #12898

Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
2024-12-02 12:18:09 +00:00
Brad Fitzpatrick
a68efe2088 cmd/checkmetrics: add command for checking metrics against kb
This commit adds a command to validate that all the metrics that
are registered in the client are also present in a path or URL.

It is intended to be run from the KB against the latest version of
tailscale.

Updates tailscale/corp#24066
Updates tailscale/corp#22075

Co-Authored-By: Brad Fitzpatrick <bradfitz@tailscale.com>
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2024-12-02 10:30:46 +01:00
Irbe Krumina
13faa64c14 cmd/k8s-operator: always set stateful filtering to false (#14216)
Updates tailscale/tailscale#12108

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-11-29 15:44:58 +00:00
Irbe Krumina
44c8892c18 Makefile,./build_docker.sh: update kube operator image build target name (#14251)
Updates tailscale/corp#24540
Updates tailscale/tailscale#12914

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-11-29 15:32:18 +00:00
Irbe Krumina
f8587e321e cmd/k8s-operator: fix port name change bug for egress ProxyGroup proxies (#14247)
Ensure that the ExternalName Service port names are always synced to the
ClusterIP Service, to fix a bug where if users created a Service with
a single unnamed port and later changed to 1+ named ports, the operator
attempted to apply an invalid multi-port Service with an unnamed port.
Also, fixes a small internal issue where not-yet Service status conditions
were lost on a spec update.

Updates tailscale/tailscale#10102

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-11-29 10:37:25 +00:00
Kristoffer Dalby
61dd2662ec tsnet: remove flaky test marker from metrics
Updates #13420

Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2024-11-28 15:00:26 +01:00
Kristoffer Dalby
caba123008 wgengine/magicsock: packet/bytes metrics should not count disco
Updates #13420

Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2024-11-28 15:00:26 +01:00
Kristoffer Dalby
225d8f5a88 tsnet: validate sent data in metrics test
Updates #13420

Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2024-11-28 15:00:26 +01:00
Kristoffer Dalby
e55899386b tsnet: split bytes and routes metrics tests
Updates #13420

Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2024-11-28 15:00:26 +01:00
Kristoffer Dalby
06d929f9ac tsnet: send less data in metrics integration test
This commit reduces the amount of data sent in the metrics
data integration test from 10MB to 1MB.

On various machines 10MB was quite flaky, while 1MB has not failed
once in 10000 runs.

Updates #13420

Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2024-11-28 15:00:26 +01:00
Kristoffer Dalby
41e56cedf8 health: move health metrics test to health_test
Updates #13420

Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2024-11-28 15:00:26 +01:00
Joe Tsai
bac3af06f5 logtail: avoid bytes.Buffer allocation (#11858)
Re-use a pre-allocated bytes.Buffer struct and
shallow copy the result of bytes.NewBuffer into it
to avoid allocating the struct.

Note that we're only reusing the bytes.Buffer struct itself
and not the underlying []byte temporarily stored within it.

Updates #cleanup
Updates tailscale/corp#18514
Updates golang/go#67004

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2024-11-27 11:18:04 -08:00
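
A sketch of the trick (not the logtail code): dereferencing bytes.NewBuffer's result and copying it into a long-lived field reuses the Buffer header allocation; the fresh, never-used Buffer is safe to copy by value.

    import "bytes"

    type uploader struct {
    	buf bytes.Buffer // reused struct header; the []byte inside is not reused
    }

    func (u *uploader) bodyBuffer(b []byte) *bytes.Buffer {
    	u.buf = *bytes.NewBuffer(b) // shallow copy of the struct only
    	return &u.buf
    }
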
Anton Tolchanov
bb80f14ff4 ipn/localapi: count localapi requests to metric endpoints
Updates tailscale/corp#22075

Signed-off-by: Anton Tolchanov <anton@tailscale.com>
2024-11-27 09:25:06 +00:00
Andrew Dunham
e87b71ec3c control/controlhttp: set *health.Tracker in tests
Observed during another PR:
    https://github.com/tailscale/tailscale/actions/runs/12040045880/job/33569141807

Updates #cleanup

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I9e0f49a35485fa2e097892737e5e3c95bf775a90
2024-11-26 18:05:05 -05:00
Nick Khyl
a62f7183e4 cmd/tailscale/cli: fix format string
Updates #12687

Signed-off-by: Nick Khyl <nickk@tailscale.com>
2024-11-26 16:11:46 -06:00
Mario Minardi
26de518413 ipn/ipnlocal: only check CanUseExitNode if we are attempting to use one (#14230)
In https://github.com/tailscale/tailscale/pull/13726 we added logic to
`checkExitNodePrefsLocked` to error out on platforms where using an
exit node is unsupported in order to give users more obvious feedback
than having this silently fail downstream.

The above change neglected to properly check whether the device in
question was actually trying to use an exit node when doing the check
and was incorrectly returning an error on any calls to
`checkExitNodePrefsLocked` on platforms where using an exit node is not
supported as a result.

This change remedies this by adding a check to see whether the device is
attempting to use an exit node before doing the `CanUseExitNode` check.

Updates https://github.com/tailscale/corp/issues/24835

Signed-off-by: Mario Minardi <mario@tailscale.com>
2024-11-26 10:45:03 -07:00
James Tucker
4d33f30f91 net/netmon: improve panic reporting from #14202
I was hoping we'd catch an example input quickly, but the reporter had
rebooted their machine and it is no longer exhibiting the behavior. As
such this code may be sticking around quite a bit longer and we might
encounter other errors, so include the panic in the log entry.

Updates #14201
Updates #14202
Updates golang/go#70528

Signed-off-by: James Tucker <james@tailscale.com>
2024-11-25 12:31:24 -08:00
Nick Khyl
788121f475 docs/windows/policy: update ADMX policy definitions to reflect the syspolicy settings
We add a policy definition for the AllowedSuggestedExitNodes syspolicy setting, allowing admins
to configure a list of exit node IDs to be used as a pool for automatic suggested exit node selection.

We update definitions for policy settings configurable on both a per-user and per-machine basis,
such as UI customizations, to specify class="Both".

Lastly, we update the help text for existing policy definitions to include a link to the KB article
as the last line instead of in the first paragraph.

Updates #12687
Updates tailscale/corp#19681

Signed-off-by: Nick Khyl <nickk@tailscale.com>
2024-11-25 10:49:22 -06:00
Irbe Krumina
ba3523fc3f cmd/containerboot: preserve headers of metrics endpoints responses (#14204)
Updates tailscale/tailscale#11292

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-11-23 08:51:40 +00:00
James Tucker
f6431185b0 net/netmon: catch ParseRIB panic to gather buffer data
Updates #14201
Updates golang/go#70528

Signed-off-by: James Tucker <james@tailscale.com>
2024-11-22 14:56:06 -08:00
Nick Khyl
36b7449fea ipn/ipnlocal: rebuild allowed suggested exit nodes when syspolicy changes
In this PR, we update LocalBackend to rebuild the set of allowed suggested exit nodes whenever
the AllowedSuggestedExitNodes syspolicy setting changes. Additionally, we request a new suggested
exit node when this occurs, enabling its use if the ExitNodeID syspolicy setting is set to auto:any.

Updates #12687

Signed-off-by: Nick Khyl <nickk@tailscale.com>
2024-11-22 15:01:45 -06:00
Nick Khyl
3353f154bb control/controlclient: use the most recent syspolicy.MachineCertificateSubject value
This PR removes the sync.Once wrapper around retrieving the MachineCertificateSubject policy
setting value, ensuring the most recent version is always used if it changes after the service starts.

Although this policy setting is used by a very limited number of customers, recent support escalations have highlighted issues caused by outdated or incorrect policy values being applied.

Updates #12687

Signed-off-by: Nick Khyl <nickk@tailscale.com>
2024-11-22 14:50:32 -06:00
Nick Khyl
eb3cd32911 ipn/ipnlocal: update ipn.Prefs when there's a change in syspolicy settings
In this PR, we update ipnlocal.NewLocalBackend to subscribe to policy change notifications
and reapply syspolicy settings to the current profile's ipn.Prefs whenever a change occurs.

Updates #12687

Signed-off-by: Nick Khyl <nickk@tailscale.com>
2024-11-22 14:41:39 -06:00
Nick Khyl
2ab66d9698 ipn/ipnlocal: move syspolicy handling from setExitNodeID to applySysPolicy
This moves code that handles ExitNodeID/ExitNodeIP syspolicy settings
from (*LocalBackend).setExitNodeID to applySysPolicy.

Updates #12687

Signed-off-by: Nick Khyl <nickk@tailscale.com>
2024-11-22 14:41:39 -06:00
Nick Khyl
7c8f663d70 cmd/tailscaled: log SCM interactions if the policy setting is enabled at the time of interaction
This updates the syspolicy.LogSCMInteractions check to run at the time of an interaction,
just before logging a message, instead of during service startup. This ensures the most
recent policy setting is used if it has changed since the service started.

Updates #12687

Signed-off-by: Nick Khyl <nickk@tailscale.com>
2024-11-22 14:37:38 -06:00
Nick Khyl
50bf32a0ba cmd/tailscaled: flush DNS if FlushDNSOnSessionUnlock is true upon receiving a session change notification
In this PR, we move the syspolicy.FlushDNSOnSessionUnlock check from service startup
to when a session change notification is received. This ensures that the most recent policy
setting value is used if it has changed since the service started.

We also plan to handle session change notifications for unrelated reasons
and need to decouple notification subscriptions from DNS anyway.

Updates #12687
Updates tailscale/corp#18342

Signed-off-by: Nick Khyl <nickk@tailscale.com>
2024-11-22 14:37:22 -06:00
Nick Khyl
8e5cfbe4ab util/syspolicy/rsop: reduce policyReloadMinDelay and policyReloadMaxDelay when in tests
These delays determine how soon syspolicy change callbacks are invoked after a policy setting is updated
in a policy source. For tests, we shorten these delays to minimize unnecessary wait times. This adjustment
only affects tests that subscribe to policy change notifications and modify policy settings after they have
already been set. Initial policy settings are always available immediately without delay.

Updates #12687

Signed-off-by: Nick Khyl <nickk@tailscale.com>
2024-11-22 09:51:21 -06:00
Nick Khyl
462e1fc503 ipn/{ipnlocal,localapi}, wgengine/netstack: call (*LocalBackend).Shutdown when tests that create them complete
We have several places where LocalBackend instances are created for testing, but they are rarely shut down
when the tests that created them exit.

In this PR, we update newTestLocalBackend and similar functions to use testing.TB.Cleanup(lb.Shutdown)
to ensure LocalBackend instances are properly shut down during test cleanup.
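
The pattern, as a self-contained toy (the real helpers construct a LocalBackend; the stub type here is only for illustration):

    type backend struct{}

    func (b *backend) Shutdown() {}

    func newTestBackend(t testing.TB) *backend {
        b := &backend{}
        t.Cleanup(b.Shutdown) // runs automatically when the test and its subtests finish
        return b
    }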

Updates #12687

Signed-off-by: Nick Khyl <nickk@tailscale.com>
2024-11-22 09:46:11 -06:00
Tom Proctor
74d4652144 cmd/{containerboot,k8s-operator},k8s-operator: new options to expose user metrics (#14035)
containerboot:

Adds 3 new environment variables for containerboot, `TS_LOCAL_ADDR_PORT` (default
`"${POD_IP}:9002"`), `TS_METRICS_ENABLED` (default `false`), and `TS_DEBUG_ADDR_PORT`
(default `""`), to configure metrics and debug endpoints. In a follow-up PR, the
health check endpoint will be updated to use the `TS_LOCAL_ADDR_PORT` if
`TS_HEALTHCHECK_ADDR_PORT` hasn't been set.

Users previously only had access to internal debug metrics (which are unstable
and not recommended) via passing the `--debug` flag to tailscaled, but can now
set `TS_METRICS_ENABLED=true` to expose the stable metrics documented at
https://tailscale.com/kb/1482/client-metrics at `/metrics` on the addr/port
specified by `TS_LOCAL_ADDR_PORT`.

Users can also now configure a debug endpoint more directly via the
`TS_DEBUG_ADDR_PORT` environment variable. This is not recommended for production
use, but exposes an internal set of debug metrics and pprof endpoints.

operator:

The `ProxyClass` CRD's `.spec.metrics.enable` field now enables serving the
stable user metrics documented at https://tailscale.com/kb/1482/client-metrics
at `/metrics` on the same "metrics" container port that debug metrics were
previously served on. To smooth the transition for anyone relying on the way the
operator previously consumed this field, we also _temporarily_ serve tailscaled's
internal debug metrics on the same `/debug/metrics` path as before, until 1.82.0
when debug metrics will be turned off by default even if `.spec.metrics.enable`
is set. At that point, anyone who wishes to continue using the internal debug
metrics (not recommended) will need to set the new `ProxyClass` field
`.spec.statefulSet.pod.tailscaleContainer.debug.enable`.

Users who wish to opt out of the transitional behaviour, where enabling
`.spec.metrics.enable` also enables debug metrics, can set
`.spec.statefulSet.pod.tailscaleContainer.debug.enable` to false (recommended).

Separately but related, the operator will no longer specify a host port for the
"metrics" container port definition. This caused scheduling conflicts when k8s
needs to schedule more than one proxy per node, and was not necessary for allowing
the pod's port to be exposed to prometheus scrapers.

Updates #11292

---------

Co-authored-by: Kristoffer Dalby <kristoffer@tailscale.com>
Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
2024-11-22 15:41:07 +00:00
Irbe Krumina
c59ab6baac cmd/k8s-operator/deploy: ensure that operator can write kube state Events (#14177)
A small follow-up to #14112- ensures that the operator itself can emit
Events for its kube state store changes.

Updates tailscale/tailscale#14080

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-11-22 06:53:46 +00:00
Andrea Gottardo
e3c6ca43d3 cli: present risk warning when setting up app connector on macOS (#14181) 2024-11-21 12:56:41 -08:00
Brad Fitzpatrick
0c8c7c0f90 net/tsaddr: include test input in test failure output
https://go.dev/wiki/CodeReviewComments#useful-test-failures

(Previously it was using subtests with names including the input, but
 once those went away, there was no context left)

Updates #14169

Change-Id: Ib217028183a3d001fe4aee58f2edb746b7b3aa88
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-11-21 08:32:38 -08:00
Andrew Dunham
af4c3a4a1b cmd/tailscale/cli: create netmon in debug ts2021
Otherwise we'll see a panic if we hit the dnsfallback code and try to
call NewDialer with a nil NetMon.

Updates #14161

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I81c6e72376599b341cb58c37134c2a948b97cf5f
2024-11-20 22:37:26 -05:00
Brad Fitzpatrick
70d1241ca6 util/fastuuid: delete unused package
Its sole user was deleted in 02cafbe1ca.

And it has no public users: https://pkg.go.dev/tailscale.com/util/fastuuid?tab=importedby

And nothing in other Tailscale repos that I can find.

Updates tailscale/corp#24721

Change-Id: I8755770a255a91c6c99f596e6d10c303b3ddf213
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-11-20 16:55:00 -08:00
Brad Fitzpatrick
02cafbe1ca tsweb: change RequestID format to have a date in it
So we can locate them in logs more easily.

Updates tailscale/corp#24721

Change-Id: Ia766c75608050dde7edc99835979a6e9bb328df2
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-11-20 15:55:09 -08:00
James Scott
ebaf33a80c net/tsaddr: extract IsTailscaleIPv4 from IsTailscaleIP (#14169)
Extracts tsaddr.IsTailscaleIPv4 out of tsaddr.IsTailscaleIP.

This will allow for checking valid Tailscale assigned IPv4 addresses
without checking IPv6 addresses.
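
Roughly, the split looks like this (a sketch, not the actual tsaddr code; 100.64.0.0/10 is the CGNAT range Tailscale assigns IPv4 addresses from):

    var tsCGNATRange = netip.MustParsePrefix("100.64.0.0/10")

    // IsTailscaleIPv4 reports whether ip is an IPv4 address in Tailscale's range.
    func IsTailscaleIPv4(ip netip.Addr) bool {
        return ip.Is4() && tsCGNATRange.Contains(ip)
    }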

Updates #14168
Updates tailscale/corp#24620

Signed-off-by: James Scott <jim@tailscale.com>
2024-11-20 12:28:25 -08:00
Irbe Krumina
ebeb5da202 cmd/k8s-operator,kube/kubeclient,docs/k8s: update rbac to emit events + small fixes (#14164)
This is a follow-up to #14112 where our internal kube client was updated
to allow it to emit Events - this updates our sample kube manifests
and tsrecorder manifest templates so they can benefit from this functionality.

Updates tailscale/tailscale#14080

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-11-20 14:22:34 +00:00
James Stocker
303a4a1dfb Make the deployment of an IngressClass optional, default to true (#14153)
Fixes tailscale/tailscale#14152
Signed-off-by: James Stocker <jamesrstocker@gmail.com>

Co-authored-by: James Stocker <james.stocker@intenthq.co.uk>
2024-11-20 06:43:59 +00:00
Anton Tolchanov
9f33aeb649 wgengine/filter: actually use the passed CapTestFunc [capver 109]
Initial support for SrcCaps was added in 5ec01bf but it was not actually
working without this.

Updates #12542

Signed-off-by: Anton Tolchanov <anton@tailscale.com>
2024-11-19 19:18:35 +00:00
Aaron Klotz
48343ee673 util/winutil/s4u: fix token handle leak
Fixes #14156

Signed-off-by: Aaron Klotz <aaron@tailscale.com>
2024-11-19 14:11:50 -05:00
Brad Fitzpatrick
810da91a9e version: fix earlier test/wording mistakes
Updates #14069

Change-Id: I1d2fd8a8ab6591af11bfb83748b94342a8ac718f
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-11-19 10:59:21 -08:00
Brad Fitzpatrick
d62baa45e6 version: validate Long format on Android builds
Updates #14069

Change-Id: I134a90db561dacc4b1c1c66ccadac135b5d64cf3
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-11-19 10:04:37 -08:00
License Updater
bb3d0cae5f licenses: update license notices
Signed-off-by: License Updater <noreply+license-updater@tailscale.com>
2024-11-19 09:25:57 -08:00
Irbe Krumina
00517c8189 kube/{kubeapi,kubeclient},ipn/store/kubestore,cmd/{containerboot,k8s-operator}: emit kube store Events (#14112)
Adds functionality to kube client to emit Events.
Updates kube store to emit Events when tailscaled state has been loaded or updated, or if any errors were
encountered during those operations.
This should help in cases where an error related to state loading/updating caused the Pod to crash in a loop:
unlike the logs of the originally failed container instance, Events associated with the Pod will still be
accessible even after N restarts.

Updates tailscale/tailscale#14080

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-11-19 13:07:19 +00:00
Brad Fitzpatrick
da70a84a4b ipn/ipnlocal: fix build, remove another Notify.BackendLogID reference that crept in
I merged 5cae7c51bf (removing Notify.BackendLogID) and 93db503565
(adding another reference to Notify.BackendLogID) that didn't have merge
conflicts, but didn't compile together.

This removes the new reference, fixing the build.

Updates #14129

Change-Id: I9bb68efd977342ea8822e525d656817235039a66
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-11-18 12:17:19 -08:00
Brad Fitzpatrick
93db503565 ipn/ipnlocal: add IPN Bus rate-limiting watch bit NotifyRateLimit
Limit spamming GUIs with boring updates to once in 3 seconds, unless
the notification is relatively interesting and the GUI should update
immediately.

This is basically @barnstar's #14119 but with the logic moved to be
per-watch-session (since the bit is per session), rather than
globally. And this distinguishes notable Notify messages (such as
state changes) and makes them send immediately.
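
A hand-wavy sketch of the per-session behavior (the type and field names here are hypothetical, not the actual ipnlocal code):

    type watchSession struct {
        lastSend time.Time
        ch       chan ipn.Notify
    }

    // maybeSend delivers notable notifications immediately and coalesces
    // boring ones to at most one every 3 seconds.
    func (s *watchSession) maybeSend(n ipn.Notify, notable bool, now time.Time) bool {
        if !notable && now.Sub(s.lastSend) < 3*time.Second {
            return false // drop; a later update will carry the fresh state
        }
        s.lastSend = now
        s.ch <- n
        return true
    }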

Updates tailscale/corp#24553

Change-Id: I79cac52cce85280ce351e65e76ea11e107b00b49
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-11-18 10:50:30 -08:00
Andrew Lytvynov
c2a7f17f2b sessionrecording: implement v2 recording endpoint support (#14105)
The v2 endpoint supports HTTP/2 bidirectional streaming and acks for
received bytes. This is used to detect when a recorder disappears to
more quickly terminate the session.

Updates https://github.com/tailscale/corp/issues/24023

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
2024-11-18 09:55:54 -08:00
Brad Fitzpatrick
5cae7c51bf ipn: remove unused Notify.BackendLogID
Updates #14129

Change-Id: I13b5df8765e786a4a919d6b2e72afe987000b2d1
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-11-18 08:36:41 -08:00
Brad Fitzpatrick
f1e1048977 go.mod: bump tailscale/wireguard-go
Updates #11899

Change-Id: Ibd75134a20798c84c7174ba3af639cf22836c7d7
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-11-16 15:31:07 -08:00
Brad Fitzpatrick
3b93fd9c44 net/captivedetection: replace 10k log lines with ... less
We see tons of logs of the form:

    2024/11/15 19:57:29 netcheck: [v2] 76 available captive portal detection endpoints: [Endpoint{URL="http://192.73.240.161/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://192.73.240.121/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://192.73.240.132/generate_204", StatusCode=204, ExpectedContent="",
11:58SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://209.177.158.246/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://209.177.158.15/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://199.38.182.118/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://192.73.243.135/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://192.73.243.229/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://192.73.243.141/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://45.159.97.144/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://45.159.97.61/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://45.159.97.233/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://45.159.98.196/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://45.159.98.253/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://45.159.98.145/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://68.183.90.120/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://209.177.156.94/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://192.73.248.83/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://209.177.156.197/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://199.38.181.104/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://209.177.145.120/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://199.38.181.93/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://199.38.181.103/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://102.67.165.90/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://102.67.165.185/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://102.67.165.36/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://176.58.90.147/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://176.58.90.207/generate_204", StatusCode=204, 
ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://176.58.90.104/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://162.248.221.199/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://162.248.221.215/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://162.248.221.248/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://185.34.3.232/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://185.34.3.207/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://185.34.3.75/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://208.83.234.151/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://208.83.233.233/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://208.72.155.133/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://185.40.234.219/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://185.40.234.113/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://185.40.234.77/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://43.245.48.220/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://43.245.48.50/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://43.245.48.250/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://192.73.252.65/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://192.73.252.134/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://208.111.34.178/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://43.245.49.105/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://43.245.49.83/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://43.245.49.144/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://176.58.92.144/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://176.58.88.183/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://176.58.92.254/generate_204", 
StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://148.163.220.129/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://148.163.220.134/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://148.163.220.210/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://192.73.242.187/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://192.73.242.28/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://192.73.242.204/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://176.58.93.248/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://176.58.93.147/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://176.58.93.154/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://192.73.244.245/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://208.111.40.12/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://208.111.40.216/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://103.6.84.152/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://205.147.105.30/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://205.147.105.78/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://102.67.167.245/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://102.67.167.37/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://102.67.167.188/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://103.84.155.178/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://103.84.155.188/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://103.84.155.46/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=true, Provider=DERPMapOther} Endpoint{URL="http://controlplane.tailscale.com/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=false, Provider=Tailscale} Endpoint{URL="http://login.tailscale.com/generate_204", StatusCode=204, ExpectedContent="", SupportsTailscaleChallenge=false, Provider=Tailscale}]

That can be much shorter.

Also add a fast exit path to the concurrency on match. Doing 5 all at
once is still pretty gratuitous, though.

Updates #1634
Fixes #13019

Change-Id: Icdbb16572fca4477b0ee9882683a3ac6eb08e2f2
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-11-15 15:25:31 -08:00
Naman Sood
aefbed323f ipn,tailcfg: add VIPService struct and c2n to fetch them from client (#14046)
* ipn,tailcfg: add VIPService struct and c2n to fetch them from client

Updates tailscale/corp#22743, tailscale/corp#22955

Signed-off-by: Naman Sood <mail@nsood.in>

* more review fixes

Signed-off-by: Naman Sood <mail@nsood.in>

* don't mention PeerCapabilityServicesDestination since it's currently unused

Signed-off-by: Naman Sood <mail@nsood.in>

---------

Signed-off-by: Naman Sood <mail@nsood.in>
2024-11-15 16:14:06 -05:00
Percy Wegmann
1355f622be cmd/derpprobe,prober: add ability to restrict derpprobe to a single region
Updates #24522

Co-authored-by: Mario Minardi <mario@tailscale.com>
Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-11-15 13:42:58 -06:00
Brad Fitzpatrick
c3c4c05331 tstest/integration/testcontrol: remove a vestigial unused parameter
Back in the day this testcontrol package only spoke the
nacl-boxed-based control protocol, which used this.

Then we added ts2021, which didn't, but still sometimes used it.

Then we removed the old mode and didn't remove this parameter
in 2409661a0d.

Updates #11585

Change-Id: Ifd290bd7dbbb52b681b3599786437a15bc98b6a5
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-11-15 10:05:35 -08:00
Brad Fitzpatrick
8fd471ce57 control/controlclient: disable https for http://localhost:$port URLs
Previously we required the program to be running in a test or have
TS_CONTROL_IS_PLAINTEXT_HTTP before we disabled its https fallback
on "http" schema control URLs to localhost with ports.

But nobody accidentally does all three of "http", explicit port
number, localhost and doesn't mean it. And when they mean it, they're
testing a localhost dev control server (like I was) and don't want 443
getting involved.
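
The heuristic amounts to something like this (an illustrative check, not the exact controlclient code):

    func isLocalhostDevControlURL(u *url.URL) bool {
        return u.Scheme == "http" &&
            u.Port() != "" && // explicit port
            (u.Hostname() == "localhost" || u.Hostname() == "127.0.0.1")
    }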

As of the changes for #13597, this became more annoying in that we
were trying to use a port which wasn't even available.

Updates #13597

Change-Id: Icd00bca56043d2da58ab31de7aa05a3b269c490f
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-11-14 12:12:16 -08:00
Brad Fitzpatrick
e73cfd9700 go.toolchain.rev: bump from Go 1.23.1 to Go 1.23.3
Updates #14100

Change-Id: I57f9d4260be15ce1daebe4a9782910aba3fb9dc9
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-11-14 10:57:49 -08:00
Brad Fitzpatrick
f593d3c5c0 cmd/tailscale/cli: add "help" alias for --help
Fixes #14053

Change-Id: I0a13e11af089f02b0656fea0d316543c67591fb5
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-11-13 11:08:53 -08:00
dependabot[bot]
bfe5cd8760 .github: Bump actions/setup-go from 5.0.2 to 5.1.0 (#13934)
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 5.0.2 to 5.1.0.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](0a12ed9d6a...41dfa10bad)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-13 10:56:44 -07:00
Walter Poupore
0c9ade46a4 words: Add scoville to scales.txt (#14084)
https://en.wikipedia.org/wiki/Scoville_scale

Updates #words

Signed-off-by: Walter Poupore <walterp@tailscale.com>
2024-11-13 09:25:12 -08:00
dependabot[bot]
4474dcea68 .github: Bump actions/cache from 4.1.0 to 4.1.2 (#13933)
Bumps [actions/cache](https://github.com/actions/cache) from 4.1.0 to 4.1.2.
- [Release notes](https://github.com/actions/cache/releases)
- [Changelog](https://github.com/actions/cache/blob/main/RELEASES.md)
- [Commits](2cdf405574...6849a64899)

---
updated-dependencies:
- dependency-name: actions/cache
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-13 09:46:30 -07:00
dependabot[bot]
0cfa217f3e .github: Bump actions/upload-artifact from 4.4.0 to 4.4.3 (#13811)
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 4.4.0 to 4.4.3.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](50769540e7...b4b15b8c7c)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-13 09:34:10 -07:00
dependabot[bot]
1847f26042 .github: Bump github/codeql-action from 3.26.11 to 3.27.1 (#14062)
Bumps [github/codeql-action](https://github.com/github/codeql-action) from 3.26.11 to 3.27.1.
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](6db8d6351f...4f3212b617)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-13 09:30:14 -07:00
Naman Sood
7c6562c861 words: scale up our word count (#14082)
Updates tailscale/corp#14698

Signed-off-by: Naman Sood <mail@nsood.in>
2024-11-13 09:56:02 -05:00
Brad Fitzpatrick
0c6bd9a33b words: add a scale
https://portsmouthbrewery.com/shilling-scale/

Any scale that includes "wee heavy" is a scale worth including.

Updates #words

Change-Id: I85fd7a64cf22e14f686f1093a220cb59c43e46ba
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-11-13 06:09:59 -08:00
Irbe Krumina
cf41cec5a8 cmd/{k8s-operator,containerboot},k8s-operator: remove support for proxies below capver 95. (#13986)
Updates tailscale/tailscale#13984

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-11-12 17:13:26 +00:00
Irbe Krumina
e38522c081 go.{mod,sum},build_docker.sh: bump mkctr, add ability to set OCI annotations for images (#14065)
Updates tailscale/tailscale#12914

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-11-12 14:23:38 +00:00
Tom Proctor
d8a3683fdf cmd/k8s-operator: restart ProxyGroup pods less (#14045)
We currently annotate pods with a hash of the tailscaled config so that
we can trigger pod restarts whenever it changes. However, the hash
updates more frequently than necessary, causing more restarts than
needed. This commit removes two causes: scaling up/down and removing
the auth key after pods have initially authed to control. However, note
that pods will still restart on scale-up/down because of the updated set
of volumes mounted into each pod. Hopefully we can fix that in a planned
follow-up PR.

Updates #13406

Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
2024-11-12 14:18:19 +00:00
Brad Fitzpatrick
4e0fc037e6 all: use iterators over slice views more
This gets close to all of the remaining ones.

Updates #12912

Change-Id: I9c672bbed2654a6c5cab31e0cbece6c107d8c6fa
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-11-11 13:22:34 -08:00
Brad Fitzpatrick
00be1761b7 util/codegen: treat unique.Handle as an opaque value type
It doesn't need a Clone method, like a time.Time, etc.

And then, because Go 1.23+ uses unique.Handle internally for
the netip package types, we can remove those special cases.

Updates #14058 (pulled out from that PR)
Updates tailscale/corp#24485

Change-Id: Iac3548a9417ccda5987f98e0305745a6e178b375
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-11-11 12:39:19 -08:00
Irbe Krumina
b9ecc50ce3 cmd/k8s-operator,k8s-operator,kube/kubetypes: add an option to configure app connector via Connector spec (#13950)
* cmd/k8s-operator,k8s-operator,kube/kubetypes: add an option to configure app connector via Connector spec

Updates tailscale/tailscale#11113

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-11-11 11:43:54 +00:00
M. J. Fromberger
6ff85846bc safeweb: add a Shutdown method to the Server type (#14048)
Updates #14047

Change-Id: I2d20454c715b11ad9c6aad1d81445e05a170c3a2
Signed-off-by: M. J. Fromberger <fromberger@tailscale.com>
2024-11-08 10:02:16 -08:00
Anton Tolchanov
64d70fb718 ipn/ipnlocal: log a summary of posture identity response
Perhaps I was too optimistic in #13323 thinking we won't need logs for
this. Let's log a summary of the response without logging specific
identifiers.

Updates tailscale/corp#24437

Signed-off-by: Anton Tolchanov <anton@tailscale.com>
2024-11-08 16:20:07 +00:00
Brad Fitzpatrick
020cacbe70 derp/derphttp: don't link websockets other than on GOOS=js
Or unless the new "ts_debug_websockets" build tag is set.

Updates #1278

Change-Id: Ic4c4f81c1924250efd025b055585faec37a5491d
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-11-07 22:29:41 -08:00
Brad Fitzpatrick
c3306bfd15 control/controlhttp/controlhttpserver: split out Accept to its own package
Otherwise all the clients only using control/controlhttp for the
ts2021 HTTP client were also pulling in WebSocket libraries, as the
server side always needs to speak websockets, but only GOOS=js clients
speak it.

This doesn't yet totally remove the websocket dependency on Linux because
Linux has a envknob opt-in to act like GOOS=js for manual testing and force
the use of WebSockets for DERP only (not control). We can put that behind
a build tag in a future change to eliminate the dep on all GOOSes.

Updates #1278

Change-Id: I4f60508f4cad52bf8c8943c8851ecee506b7ebc9
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-11-07 22:29:41 -08:00
Brad Fitzpatrick
23880eb5b0 cmd/tailscaled: support "ts_omit_ssh" build tag to remove SSH
Some environments would like to remove Tailscale SSH support for the
binary for various reasons when not needed (either for peace of mind,
or the ~1MB of binary space savings).
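
The usual shape of such a feature-stripping tag looks roughly like this (a sketch of the pattern; the exact file and wiring in cmd/tailscaled may differ):

    //go:build !ts_omit_ssh

    package main

    // A tag-guarded file like this keeps the SSH package (and everything it
    // pulls in) out of binaries built with the tag:
    //
    //     go build -tags=ts_omit_ssh tailscale.com/cmd/tailscaled
    import _ "tailscale.com/ssh/tailssh"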

Updates tailscale/corp#24454
Updates #1278
Updates #12614

Change-Id: Iadd6c5a393992c254b5dc9aa9a526916f96fd07a
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-11-07 16:06:59 -08:00
Irbe Krumina
2c8859c2e7 client/tailscale,ipn/{ipnlocal,localapi}: add a pre-shutdown localAPI endpoint that terminates control connections. (#14028)
Adds a /disconnect-control local API endpoint that just shuts down the control client.
This can be run before shutting down an HA subnet router/app connector replica - it will ensure
that all connections to control are dropped, so control considers this node inactive and tells
peers to switch over to another replica. Meanwhile the existing connections keep working (assuming
that the replica is given some graceful shutdown period).

Updates tailscale/tailscale#14020

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-11-07 19:27:53 +00:00
Brad Fitzpatrick
3090461961 tsweb/varz: optimize some allocs, add helper func for others
Updates #cleanup
Updates tailscale/corp#23546 (noticed when doing this)

Change-Id: Ia9f627fe32bb4955739b2787210ba18f5de27f4d
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-11-07 08:09:16 -08:00
Irbe Krumina
8ba9b558d2 envknob,kube/kubetypes,cmd/k8s-operator: add app type for ProxyGroup (#14029)
Sets a custom hostinfo app type for ProxyGroup replicas, similarly
to how we do it for all other Kubernetes Operator managed components.

Updates tailscale/tailscale#13406,tailscale/corp#22920

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-11-07 12:42:29 +00:00
Percy Wegmann
8dcbd988f7 cmd/derper: show more information on home page
- Basic description of DERP

If configured to do so, also show

- Mailto link to security@tailscale.com
- Link to Tailscale Security Policies
- Link to Tailscale Acceptable Use Policy

Updates tailscale/corp#24092

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-11-06 11:06:08 -06:00
License Updater
065825e94c licenses: update license notices
Signed-off-by: License Updater <noreply+license-updater@tailscale.com>
2024-11-05 15:33:17 -08:00
Brad Fitzpatrick
01185e436f types/result, util/lineiter: add package for a result type, use it
This adds a new generic result type (motivated by golang/go#70084) to
try it out, and uses it in the new lineutil package (replacing the old
lineread package), changing that package to return iterators:
sometimes over []byte (when the input is all in memory), but sometimes
iterators over results of []byte, if errors might happen at runtime.
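
As a rough illustration of the shape of such a type (not the actual types/result API):

    // Result holds either a value or an error.
    type Result[T any] struct {
        value T
        err   error
    }

    func Of[T any](v T) Result[T]          { return Result[T]{value: v} }
    func Error[T any](err error) Result[T] { return Result[T]{err: err} }

    // Value returns the value and error in the usual Go (T, error) form.
    func (r Result[T]) Value() (T, error) { return r.value, r.err }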

Updates #12912
Updates golang/go#70084

Change-Id: Iacdc1070e661b5fb163907b1e8b07ac7d51d3f83
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-11-05 10:27:52 -08:00
Irbe Krumina
809a6eba80 cmd/k8s-operator: allow to optionally configure tailscaled port (#14005)
Updates tailscale/tailscale#13981

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-11-04 18:42:51 +00:00
Brad Fitzpatrick
d4222fae95 tsnet: add accessor to get tsd.System
Pulled out of otherwise unrelated PR #13884.

Updates tailscale/corp#22075

Change-Id: I5b539fcb4aca1b93406cf139c719a5e3c64ff7f7
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-11-03 09:58:38 -08:00
Brad Fitzpatrick
45da3a4b28 cmd/tsconnect: block after starting esbuild dev server
Thanks to @davidbuzz for raising the issue in #13973.

Fixes #8272
Fixes #13973

Change-Id: Ic413e14d34c82df3c70a97e591b90316b0b4946b
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-11-03 07:30:22 -08:00
VimT
43138c7a5c net/socks5: optimize UDP relay
Key changes:
- No mutex for every UDP packet: replace syncs.Map with a regular map for udpTargetConns
- Use socksAddr as map key for better type safety
- Add test for multi udp target

Updates #7581

Change-Id: Ic3d384a9eab62dcbf267d7d6d268bf242cc8ed3c
Signed-off-by: VimT <me@vimt.me>
2024-11-01 15:47:52 -07:00
VimT
b0626ff84c net/socks5: fix UDP relay in userspace-networking mode
This commit addresses an issue with the SOCKS5 UDP relay functionality
when using the --tun=userspace-networking option. Previously, UDP packets
were not being correctly routed into the Tailscale network in this mode.

Key changes:
- Replace single UDP connection with a map of connections per target
- Use c.srv.dial for creating connections to ensure proper routing

Updates #7581

Change-Id: Iaaa66f9de6a3713218014cf3f498003a7cac9832
Signed-off-by: VimT <me@vimt.me>
2024-11-01 15:47:52 -07:00
Brad Fitzpatrick
634cc2ba4a wgengine/netstack: remove unused taildrive deps
A filesystem was plumbed into netstack in 993acf4475
but hasn't been used since 2d5d6f5403. Remove it.

Noticed while rebasing a Tailscale fork elsewhere.

Updates tailscale/corp#16827

Change-Id: Ib76deeda205ffe912b77a59b9d22853ebff42813
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-11-01 13:40:46 -07:00
Maisem Ali
d09e9d967f ipn/ipnlocal: reload prefs correctly on ReloadConfig
We were only updating the ProfileManager and not going down
the EditPrefs path which meant the prefs weren't applied
till either the process restarted or some other pref changed.

This makes it so that we reconfigure everything correctly when
ReloadConfig is called.

Updates #13032

Signed-off-by: Maisem Ali <maisem@tailscale.com>
2024-11-01 13:37:46 -07:00
Renato Aguiar
0ffc7bf38b Fix MagicDNS on OpenBSD
Add OpenBSD to the list of platforms that need DNS reconfigured on link changes.

Signed-off-by: Renato Aguiar <renato@renatoaguiar.net>
2024-11-01 10:44:30 -07:00
Jordan Whited
49de23cf1b net/netcheck: add addReportHistoryAndSetPreferredDERP() test case (#13989)
Add an explicit case for exercising preferred DERP hysteresis around
the branch that compares latencies on a percentage basis.

Updates #cleanup

Signed-off-by: Jordan Whited <jordan@tailscale.com>
2024-10-31 19:25:00 -07:00
Aaron Klotz
84c8860472 util/syspolicy: add policy key for onboarding flow visibility
Updates https://github.com/tailscale/corp/issues/23789

Signed-off-by: Aaron Klotz <aaron@tailscale.com>
2024-10-31 15:46:40 -06:00
Andrew Lytvynov
ddbc950f46 safeweb: add support for custom CSP (#13975)
To allow more flexibility with CSPs, add a fully customizable `CSP` type
that can be provided in `Config` and encodes itself into the correct
format. Preserve the `CSPAllowInlineStyles` option as is today, but
maybe that'll get deprecated later in favor of the new CSP field.

In particular, this allows for pages loading external JS, or inline JS
with nonces or hashes (see
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy/script-src#unsafe_inline_script)

Updates https://github.com/tailscale/corp/issues/8027

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
2024-10-31 12:13:29 -07:00
Andrea Gottardo
6985369479 net/sockstats: prevent crash in setNetMon (#13985) 2024-10-31 12:00:34 -07:00
Andrew Lytvynov
3477bfd234 safeweb: add support for "/" and "/foo" handler distinction (#13980)
By counting "/" elements in the pattern we catch many scenarios, but not
the root-level handler. If either of the patterns is "/", compare the
pattern length to pick the right one.

Updates https://github.com/tailscale/corp/issues/8027

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
2024-10-31 11:12:38 -07:00
Nick Khyl
3f626c0d77 cmd/tailscale/cli, client/tailscale, ipn/localapi: add tailscale syspolicy {list,reload} commands
In this PR, we add the tailscale syspolicy command with two subcommands: list, which displays
policy settings, and reload, which forces a reload of those settings. We also update the LocalAPI
and LocalClient to facilitate these additions.

Updates #12687

Signed-off-by: Nick Khyl <nickk@tailscale.com>
2024-10-31 10:53:43 -05:00
Irbe Krumina
45354dab9b ipn,tailcfg: add app connector config knob to conffile (#13942)
Make it possible to advertise app connector via a new conffile field.
Also bumps capver - conffile deserialization errors out if unknown
fields are set, so we need to know which clients understand the new field.

Updates tailscale/tailscale#11113

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-10-31 14:45:57 +00:00
Anton Tolchanov
b4f46c31bb wgengine/magicsock: export packet drop metric for outbound errors
This required sharing the dropped packet metric between two packages
(tstun and magicsock), so I've moved its definition to util/usermetric.

Updates tailscale/corp#22075

Signed-off-by: Anton Tolchanov <anton@tailscale.com>
2024-10-31 08:33:24 +00:00
Anton Tolchanov
532b26145a wgengine/magicsock: exclude disco from throughput metrics
The user-facing metrics are intended to track data transmitted at
the overlay network level.

Updates tailscale/corp#22075

Signed-off-by: Anton Tolchanov <anton@tailscale.com>
2024-10-31 08:01:19 +00:00
James Tucker
e1e22785b4 net/netcheck: ensure prior preferred DERP is always in netchecks
In an environment with unstable latency, such as upstream bufferbloat,
there are cases where a full netcheck could drop the prior preferred
DERP (likely home DERP) from future netcheck probe plans. This will then
likely result in a home DERP having a missing sample on the next
incremental netcheck, ultimately resulting in a home DERP move.

This change does not fix our overall response to highly unstable
latency, but it is an incremental improvement to prevent single spurious
samples during a full netcheck from alone triggering a flapping
condition, as now the prior changes to include historical latency will
still provide the desired resistance, and the home DERP should not move
unless latency is consistently worse over a 5 minute period.

Note that there is a nomenclature and semantics issue remaining in the
difference between a report preferred DERP and a home DERP. A report
preferred DERP is aspirational, it is what will be picked as a home DERP
if a home DERP connection needs to be established. A node's home DERP may
be different than a recent preferred DERP, in which case a lot of
netcheck logic is fallible. In future enhancements much of the DERP move
logic should move to consider the home DERP, rather than recent report
preferred DERP.

Updates #8603
Updates #13969

Signed-off-by: James Tucker <james@tailscale.com>
2024-10-30 17:19:26 -07:00
Brad Fitzpatrick
f81348a16b util/syspolicy/source: put EnvPolicyStore env keys in their own namespace
... all prefixed with TS_DEBUGSYSPOLICY_*.

Updates #13193
Updates #12687
Updates #13855

Change-Id: Ia8024946f53e2b3afda4456a7bb85bbcf6d12bfc
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-10-30 11:27:27 -07:00
Nick Khyl
540e4c83d0 util/syspolicy/setting: make setting.Snapshot JSON-marshallable
We make setting.Snapshot JSON-marshallable in preparation for returning it from the LocalAPI.

Updates #12687

Signed-off-by: Nick Khyl <nickk@tailscale.com>
2024-10-30 12:50:29 -05:00
Nick Khyl
2a2228f97b util/syspolicy/setting: make setting.RawItem JSON-marshallable
We add setting.RawValue, a new type that facilitates unmarshalling JSON numbers and arrays
as uint64 and []string (instead of float64 and []any) for policy setting values.
We then use it to make setting.RawItem JSON-marshallable and update the tests.

Updates #12687

Signed-off-by: Nick Khyl <nickk@tailscale.com>
2024-10-30 12:50:29 -05:00
Nick Khyl
2cc1100d24 util/syspolicy/source: use errors instead of github.com/pkg/errors
Updates #12687

Signed-off-by: Nick Khyl <nickk@tailscale.com>
2024-10-30 12:14:36 -05:00
Nick Khyl
2336c340c4 util/syspolicy: implement a syspolicy store that reads settings from environment variables
In this PR, we implement (but do not use yet, pending #13727 review) a syspolicy/source.Store
that reads policy settings from environment variables. It converts a CamelCase setting.Key,
such as AuthKey or ExitNodeID, to a SCREAMING_SNAKE_CASE, TS_-prefixed environment
variable name, such as TS_AUTH_KEY and TS_EXIT_NODE_ID. It then looks up the variable
and attempts to parse it according to the expected value type. If the environment variable
is not set, the policy setting is considered not configured in this store (the syspolicy package
will still read it from other sources). Similarly, if the environment variable has an invalid value
for the setting type, it won't be used (though the reported/logged error will differ).
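
The key-to-variable mapping amounts to roughly the following (an approximation, not the actual syspolicy/source code; uses strings and unicode from the standard library):

    // envVarNameForKey converts e.g. "AuthKey" -> "TS_AUTH_KEY" and
    // "ExitNodeID" -> "TS_EXIT_NODE_ID".
    func envVarNameForKey(key string) string {
        var b strings.Builder
        b.WriteString("TS_")
        rs := []rune(key)
        for i, r := range rs {
            // Start a new word on an upper-case rune, except inside an
            // acronym run like the "ID" in "ExitNodeID".
            if i > 0 && unicode.IsUpper(r) &&
                (unicode.IsLower(rs[i-1]) || (i+1 < len(rs) && unicode.IsLower(rs[i+1]))) {
                b.WriteByte('_')
            }
            b.WriteRune(unicode.ToUpper(r))
        }
        return b.String()
    }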

Updates #13193
Updates #12687

Signed-off-by: Nick Khyl <nickk@tailscale.com>
2024-10-30 11:12:22 -05:00
Irbe Krumina
1103044598 cmd/k8s-operator,k8s-operator: add topology spread constraints to ProxyClass (#13959)
Now that we have HA for egress proxies, it makes sense to support topology
spread constraints that allow users to define more complex
topologies of how proxy Pods need to be deployed in relation to other
Pods/across regions etc.

Updates tailscale/tailscale#13406

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-10-30 10:45:31 +00:00
Tim Walters
856ea2376b wgengine/magicsock: log home DERP changes with latency
This adds additional logging on DERP home changes to allow
better troubleshooting.

Updates tailscale/corp#18095

Signed-off-by: Tim Walters <tim@tailscale.com>
2024-10-29 16:05:41 -04:00
Jonathan Nobels
aecb0ab76b tstest/tailmac: add support for mounting host directories in the guest (#13957)
updates tailscale/corp#24197

tailmac run now supports the --share option, which lets you
specify a directory on the host that can be mounted in the guest
using mount_virtiofs vmshare <path>.

Signed-off-by: Jonathan Nobels <jonathan@tailscale.com>
2024-10-29 13:49:51 -04:00
Jonathan Nobels
0f9a054cba tstest/tailmac: fix Host.app path generation (#13953)
updates tailscale/corp#24197

Generation of the Host.app path was erroneous and tailmac run
would not work unless the pwd was tailmac/bin. Now you can
invoke tailmac from anywhere.

Signed-off-by: Jonathan Nobels <jonathan@tailscale.com>
2024-10-29 13:49:29 -04:00
Anton Tolchanov
9545e36007 cmd/tailscale/cli: add 'tailscale metrics' command
- `tailscale metrics print`: to show metric values in console
- `tailscale metrics write`: to write metrics to a file (with a tempfile
  & rename dance, which is atomic on Unix).
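
The tempfile & rename dance mentioned above is roughly the following (a simplified sketch; the helper name is made up):

    func writeFileAtomic(path string, data []byte) error {
        f, err := os.CreateTemp(filepath.Dir(path), filepath.Base(path)+".tmp")
        if err != nil {
            return err
        }
        tmp := f.Name()
        if _, err := f.Write(data); err != nil {
            f.Close()
            os.Remove(tmp)
            return err
        }
        if err := f.Close(); err != nil {
            os.Remove(tmp)
            return err
        }
        return os.Rename(tmp, path) // atomic on Unix filesystems
    }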

Also, remove the `TS_DEBUG_USER_METRICS` envknob as we are getting
more confident in these metrics.

Updates tailscale/corp#22075

Signed-off-by: Anton Tolchanov <anton@tailscale.com>
2024-10-29 15:08:36 +00:00
Anton Tolchanov
38af62c7b3 ipn/ipnlocal: remove the primary routes gauge for now
Not confident this is the right way to expose this, so let's remove it
for now.

Updates tailscale/corp#22075

Signed-off-by: Anton Tolchanov <anton@tailscale.com>
2024-10-29 15:07:54 +00:00
Anton Tolchanov
11e96760ff wgengine/magicsock: fix stats packet counter on derp egress
Updates tailscale/corp#22075

Signed-off-by: Anton Tolchanov <anton@tailscale.com>
2024-10-29 15:07:45 +00:00
Anton Tolchanov
94fa6d97c5 ipn/ipnlocal: log errors while fetching serial numbers
If the client cannot fetch a serial number, write a log message helping
the user understand what happened. Also, don't just return the error
immediately, since we still have a chance to collect network interface
addresses.

Updates #5902

Signed-off-by: Anton Tolchanov <anton@tailscale.com>
2024-10-29 14:36:08 +00:00
James Tucker
0d76d7d21c tool/gocross: remove trimpath from test builds
trimpath can be inconvenient for IDEs and LSPs that do not always
correctly handle module-relative paths, and can also contribute to
caching bugs. We rarely have a real need for trimpath on
test-produced binaries, so avoiding it should be a net win.

Updates #2988
Signed-off-by: James Tucker <james@tailscale.com>
2024-10-28 16:10:55 -07:00
James Tucker
c0a1ed86cb tstest/natlab: add latency & loss simulation
A simple implementation of latency and loss simulation, applied to
writes to the ethernet interface of the NIC. The latency implementation
could be optimized substantially later if necessary.
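
Conceptually the impairment is applied per write, something like this (a toy sketch; the type, fields, and method names are made up):

    type nic struct {
        lossRate float64       // e.g. 0.01 for 1% loss
        latency  time.Duration // one-way added delay
        deliver  func([]byte)  // underlying write to the interface
    }

    func (n *nic) writeEthernet(frame []byte) {
        if rand.Float64() < n.lossRate {
            return // simulated loss: silently drop the frame
        }
        buf := append([]byte(nil), frame...) // copy; delivery happens later
        time.AfterFunc(n.latency, func() { n.deliver(buf) })
    }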

Updates #13355
Signed-off-by: James Tucker <james@tailscale.com>
2024-10-28 12:49:56 -07:00
License Updater
41aac26106 licenses: update license notices
Signed-off-by: License Updater <noreply+license-updater@tailscale.com>
2024-10-28 08:38:18 -07:00
Renato Aguiar
5d07c17b93 net/dns: fix blank lines being added to resolv.conf on OpenBSD (#13928)
During resolv.conf update, old 'search' lines are cleared but '\n' is not
deleted, leaving behind a new blank line on every update.

This adds the 's' flag to the regexp, so '\n' is included in the match and deleted when
old lines are cleared.

Also, insert the missing `\n` when the updated 'search' line is appended to resolv.conf.

Signed-off-by: Renato Aguiar <renato@renatoaguiar.net>
2024-10-28 08:00:48 -07:00
Irbe Krumina
9d1348fe21 ipn/store/kubestore: don't error if state cannot be preloaded (#13926)
Preloading of state from kube Secret should not
error if the Secret does not exist.

Updates tailscale/tailscale#7671

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-10-27 15:54:38 +00:00
Irbe Krumina
853fe3b713 ipn/store/kubestore: cache state in memory (#13918)
Cache state in memory on writes, and read from memory
on reads.
kubestore was previously always reading state from a Secret.
This change should fix bugs caused by temporary loss of access
to the kube API server and improve overall performance.
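
A rough sketch of the write-through cache (hypothetical names; the real store persists to a kube Secret):

    type cachedStore struct {
        mu      sync.Mutex
        cache   map[string][]byte
        persist func(map[string][]byte) error // writes the full state to the Secret
    }

    func (s *cachedStore) ReadState(k string) ([]byte, bool) {
        s.mu.Lock()
        defer s.mu.Unlock()
        v, ok := s.cache[k]
        return v, ok
    }

    func (s *cachedStore) WriteState(k string, v []byte) error {
        s.mu.Lock()
        s.cache[k] = v
        snapshot := maps.Clone(s.cache)
        s.mu.Unlock()
        return s.persist(snapshot) // kube API errors no longer affect reads
    }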

Fixes #7671
Updates tailscale/tailscale#12079,tailscale/tailscale#13900

Signed-off-by: Maisem Ali <maisem@tailscale.com>
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
Co-authored-by: Maisem Ali <maisem@tailscale.com>
2024-10-26 09:33:47 -05:00
Nick Kirby
6ab39b7bcd cmd/k8s-operator: validate that tailscale.com/tailnet-ip annotation value is a valid IP
Fixes #13836
Signed-off-by: Nick Kirby <nrkirb@gmail.com>
2024-10-26 13:03:36 +01:00
Nick Khyl
e815ae0ec4 util/syspolicy, ipn/ipnlocal: update syspolicy package to utilize syspolicy/rsop
In this PR, we update the syspolicy package to utilize syspolicy/rsop under the hood,
and remove syspolicy.CachingHandler, syspolicy.windowsHandler and related code
which is no longer used.

We mark the syspolicy.Handler interface and RegisterHandler/SetHandlerForTest functions
as deprecated, but keep them temporarily until they are no longer used in other repos.

We also update the package to register setting definitions for all existing policy settings
and to register the Registry-based, Windows-specific policy stores when running on Windows.

Finally, we update existing internal and external tests to use the new API and add a few more
tests and benchmarks.

Updates #12687

Signed-off-by: Nick Khyl <nickk@tailscale.com>
2024-10-25 12:41:07 -05:00
Andrew Dunham
7fe6e50858 net/dns/resolver: fix test flake
Updates #13902

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: Ib2def19caad17367e9a31786ac969278e65f51c6
2024-10-24 13:36:57 -05:00
Paul Scott
212270463b cmd/testwrapper: add pkg runtime to output (#13894)
Fixes #13893

Signed-off-by: Paul Scott <paul@tailscale.com>
2024-10-24 09:41:54 -05:00
Andrew Dunham
b2665d9b89 net/netcheck: add a Now field to the netcheck Report
This allows us to print the time that a netcheck was run, which is
useful in debugging.

Updates #10972

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: Id48d30d4eb6d5208efb2b1526a71d83fe7f9320b
2024-10-22 15:52:42 -04:00
Brad Fitzpatrick
ae5bc88ebe health: fix spurious warning about DERP home region '0'
Updates #13650

Change-Id: I6b0f165f66da3f881a4caa25d2d9936dc2a7f22c
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-10-22 10:01:30 -05:00
Maisem Ali
85241f8408 net/tstun: use /10 as subnet for TAP mode; read IP from netmap
Few changes to resolve TODOs in the code:
- Instead of using a hardcoded IP, get it from the netmap.
- Use 100.100.100.100 as the gateway IP
- Use the /10 CGNAT range instead of a random /24

Updates #2589

Signed-off-by: Maisem Ali <maisem@tailscale.com>
2024-10-21 17:24:29 -07:00
Maisem Ali
d4d21a0bbf net/tstun: restore tap mode functionality
It had bit-rotted, likely during the transition to vector io in
76389d8baf. Tested on Ubuntu 24.04
by creating a netns and doing the DHCP dance to get an IP.

Updates #2589

Signed-off-by: Maisem Ali <maisem@tailscale.com>
2024-10-21 17:02:53 -07:00
Nick Khyl
0f4c9c0ecb cmd/viewer: import types/views when generating a getter for a map field
Fixes #13873

Signed-off-by: Nick Khyl <nickk@tailscale.com>
2024-10-21 16:29:16 -05:00
Andrea Gottardo
f8f53bb6d4 health: remove SysDNSOS, add two Warnables for read+set system DNS config (#13874) 2024-10-21 13:40:43 -07:00
Erisa A
72587ab03c scripts/installer.sh: allow Archcraft for Arch packages (#13870)
Fixes #13869

Signed-off-by: Erisa A <erisa@tailscale.com>
2024-10-21 18:13:06 +01:00
Brad Fitzpatrick
c76a6e5167 derp: track client-advertised non-ideal DERP connections in more places
In f77821fd63 (released in v1.72.0), we made the client tell a DERP server
when the connection was not its ideal choice (the first node in its region).

But we didn't do anything with that information until now. This adds a
metric about how many such connections are on a given derper, and also
adds a bit to the PeerPresentFlags bitmask so watchers can identify
(and rebalance) them.

Updates tailscale/corp#372

Change-Id: Ief8af448750aa6d598e5939a57c062f4e55962be
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-10-20 19:56:28 -07:00
Andrea Gottardo
fd77965f23 net/tlsdial: call out firewalls blocking Tailscale in health warnings (#13840)
Updates tailscale/tailscale#13839

Adds a new blockblame package which can detect common MITM SSL certificates used by network appliances. We use this in `tlsdial` to display a dedicated health warning when we cannot connect to control, and a network appliance MITM attack is detected.

Signed-off-by: Andrea Gottardo <andrea@gottardo.me>
2024-10-19 00:35:46 +00:00
Mario Minardi
e711ee5d22 release/dist: clamp min / max version for synology package centre (#13857)
Clamp the min and max version for DSM 7.0 and DSM 7.2 packages when we
are building packages for the synology package centre. This change
leaves packages destined for pkgs.tailscale.com with just the min
version set to not break packages in the wild / our update flow.

Updates https://github.com/tailscale/corp/issues/22908

Signed-off-by: Mario Minardi <mario@tailscale.com>
2024-10-18 14:20:40 -06:00
Jordan Whited
877fa504b4 net/netcheck: remove arbitrary deadlines from GetReport() tests (#13832)
GetReport() may have side effects when the caller enforces a deadline
that is shorter than ReportTimeout.

Updates #13783
Updates #13394

Signed-off-by: Jordan Whited <jordan@tailscale.com>
2024-10-18 13:12:07 -07:00
Nick Khyl
874db2173b ipn/{ipnauth,ipnlocal,ipnserver}: send the auth URL to the user who started interactive login
We add the ClientID() method to the ipnauth.Actor interface and updated ipnserver.actor to implement it.
This method returns a unique ID of the connected client if the actor represents one. It helps link a series
of interactions initiated by the client, such as when a notification needs to be sent back to a specific session,
rather than all active sessions, in response to a certain request.

We also add LocalBackend.WatchNotificationsAs and LocalBackend.StartLoginInteractiveAs methods,
which are like WatchNotifications and StartLoginInteractive but accept an additional parameter
specifying an ipnauth.Actor who initiates the operation. We store these actor identities in
watchSession.owner and LocalBackend.authActor, respectively, and implement LocalBackend.sendTo
and related helper methods to enable sending notifications to watchSessions associated with actors
(or, more broadly, identifiable recipients).

We then use the above to change who receives the BrowseToURL notifications:
 - For user-initiated, interactive logins, the notification is delivered only to the user who initiated the
   process. If the initiating actor represents a specific connected client, the URL notification is sent back
   to the same LocalAPI client that called StartLoginInteractive. Otherwise, the notification is sent to all
   clients connected as that user.
   Currently, we only differentiate between users on Windows, as it is inherently a multi-user OS.
 - In all other cases (e.g., node key expiration), we send the notification to all connected users.

Updates tailscale/corp#18342

Signed-off-by: Nick Khyl <nickk@tailscale.com>
2024-10-18 15:10:02 -05:00
Jordan Whited
bb60da2764 derp: add sclient write deadline timeout metric (#13831)
Write timeouts can be indicative of stalled TCP streams. Understanding
changes in the rate of such events can be helpful in an ops context.

Updates tailscale/corp#23668

Signed-off-by: Jordan Whited <jordan@tailscale.com>
2024-10-18 10:53:49 -07:00
Brad Fitzpatrick
18fc093c0d derp: give trusted mesh peers longer write timeouts
Updates tailscale/corp#24014

Change-Id: I700872be48ab337dce8e11cabef7f82b97f0422a
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-10-18 09:37:20 -07:00
Andrew Dunham
c0a9895748 scripts/installer.sh: support DNF5
This fixes the installation on newer Fedora versions that use dnf5 as
the 'dnf' binary.

Updates #13828

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I39513243c81640fab244a32b7dbb3f32071e9fce
2024-10-17 20:28:41 -04:00
Andrea Gottardo
fa95318a47 tool/gocross: add support for tvOS Simulator (#13847)
Updates ENG-5321

Allow gocross to build a static library for the Apple TV Simulator.

Signed-off-by: Andrea Gottardo <andrea@gottardo.me>
2024-10-17 15:37:10 -07:00
Naman Sood
22c89fcb19 cmd/tailscale,ipn,tailcfg: add tailscale advertise subcommand behind envknob (#13734)
Signed-off-by: Naman Sood <mail@nsood.in>
2024-10-16 19:08:06 -04:00
Mario Minardi
d32d742af0 ipn/ipnlocal: error when trying to use exit node on unsupported platform (#13726)
Adds logic to `checkExitNodePrefsLocked` to return an error when
attempting to use exit nodes on a platform where this is not supported.
This mirrors logic that was added to error out when trying to use `ssh`
on an unsupported platform, and has very similar semantics.
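
The guard is roughly of this shape (illustrative only; the exitNodesSupported helper is made up):

    func checkExitNodePrefs(p *ipn.Prefs) error {
        if (p.ExitNodeID != "" || p.ExitNodeIP.IsValid()) && !exitNodesSupported() {
            return errors.New("exit nodes are not supported on this platform")
        }
        return nil
    }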

Fixes https://github.com/tailscale/tailscale/issues/13724

Signed-off-by: Mario Minardi <mario@tailscale.com>
2024-10-16 14:09:53 -06:00
Brad Fitzpatrick
6a885dbc36 wgengine/magicsock: fix CI-only test warning of missing health tracker
While looking at deflaking TestTwoDevicePing/ping_1.0.0.2_via_SendPacket,
there were a bunch of distracting:

    WARNING: (non-fatal) nil health.Tracker (being strict in CI): ...

This pacifies those so it's easier to work on actually deflaking the test.

Updates #11762
Updates #11874

Change-Id: I08dcb44511d4996b68d5f1ce5a2619b555a2a773
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-10-16 09:40:49 -07:00
Christian
74dd24ce71 cmd/tsconnect, logpolicy: fixes for wasm_js.go
* updates to LocalBackend require metrics to be passed in; these are now initialized
* os.MkdirTemp isn't supported in wasm/js, so we simply return an empty
  string for the logger
* adds a UDP dialer, which was missing and led to the dialer being
  incompletely initialized

Fixes #10454 and #8272

Signed-off-by: Christian <christian@devzero.io>
2024-10-16 09:39:48 -07:00
Nick Khyl
ff5f233c3a util/syspolicy: add rsop package that provides access to the resultant policy
In this PR we add syspolicy/rsop package that facilitates policy source registration
and provides access to the resultant policy merged from all registered sources for a
given scope.

Updates #12687

Signed-off-by: Nick Khyl <nickk@tailscale.com>
2024-10-16 00:06:14 -05:00
Andrew Dunham
2aa9125ac4 cmd/derpprobe: add /healthz endpoint
For a customer that wants to run their own DERP prober, let's add a
/healthz endpoint that can be used to monitor derpprobe itself.

Updates #6526

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: Iba315c999fc0b1a93d8c503c07cc733b4c8d5b6b
2024-10-15 16:35:24 -04:00
Tom Proctor
5f22f72636 hostinfo,build_docker.sh,tailcfg: more reliably detect being in a container (#13826)
Our existing container-detection tricks did not work on Kubernetes,
where Docker is no longer used as a container runtime. Extends the
existing go build tags for containers to the other container packages
and uses that to reliably detect builds that were created by Tailscale
for use in a container. Unfortunately this doesn't necessarily improve
detection for users' custom builds, but that's a separate issue.

Updates #13825

Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
2024-10-15 19:38:11 +01:00
License Updater
a8f9c0d6e4 licenses: update license notices
Signed-off-by: License Updater <noreply+license-updater@tailscale.com>
2024-10-14 08:10:13 -07:00
Kristoffer Dalby
e0d711c478 {net/connstats,wgengine/magicsock}: fix packet counting in connstats
connstats currently increments the packet counter once per call to store
a length of data. However, since UDP batch sending was introduced, a single
call can cover a whole series of packets, so the counter is only incremented
once per batch, undercounting packets on platforms that support UDP batches.
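
A minimal sketch of the fix, with illustrative names rather than the real net/connstats types: when one call covers a batch, the packet counter must grow by the batch size.

```
// Illustrative only; not the actual net/connstats API.
type counts struct {
	txPackets uint64
	txBytes   uint64
}

// record accounts for one send call that may cover several packets
// (e.g. a sendmmsg-style UDP batch). The bug was effectively doing
// txPackets++ here even when numPackets > 1.
func (c *counts) record(totalLen, numPackets int) {
	c.txPackets += uint64(numPackets)
	c.txBytes += uint64(totalLen)
}
```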

Updates tailscale/corp#22075

Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2024-10-14 14:17:56 +02:00
Kristoffer Dalby
40c991f6b8 wgengine: instrument with usermetrics
Updates tailscale/corp#22075

Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2024-10-14 11:34:31 +02:00
Paul Scott
adc8368964 tstest: avoid Fatal in ResourceCheck to show panic (#13790)
Fixes #13789

Signed-off-by: Paul Scott <paul@tailscale.com>
2024-10-14 10:02:04 +01:00
Percy Wegmann
12e6094d9c ssh/tailssh: calculate passthrough environment at latest possible stage
This allows passing through any environment variables that we set ourselves, for example DBUS_SESSION_BUS_ADDRESS.

Updates #11175

Co-authored-by: Mario Minardi <mario@tailscale.com>
Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-10-11 15:25:30 -05:00
Joe Tsai
ecc8035f73 types/bools: add Compare to compare boolean values (#13792)
The bools.Compare function compares boolean values
by reporting -1, 0, +1 for ordering so that it can be easily
used with slices.SortFunc.
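
For example, assuming the documented -1/0/+1 semantics (false ordering before true), it slots into slices.SortFunc like this:

```
// Example usage; assumes bools.Compare(x, y) reports -1, 0, or +1.
vs := []bool{true, false, true, false}
slices.SortFunc(vs, func(a, b bool) int { return bools.Compare(a, b) })
// vs is now [false false true true]
```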

Updates #cleanup
Updates tailscale/corp#11038

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2024-10-11 13:12:18 -07:00
Nick Khyl
f07ff47922 net/dns/resolver: add tests for using a forwarder with multiple upstream resolvers
If multiple upstream DNS servers are available, quad-100 sends requests to all of them
and forwards the first successful response, if any. If no successful responses are received,
it propagates the first failure from any of them.

This PR adds some test coverage for these scenarios.

Updates #13571

Signed-off-by: Nick Khyl <nickk@tailscale.com>
2024-10-11 12:02:27 -05:00
Nick Hill
c2144c44a3 net/dns/resolver: update (*forwarder).forwardWithDestChan to always return an error unless it sends a response to responseChan
We currently have two execution paths where (*forwarder).forwardWithDestChan
returns nil, rather than an error, without sending a DNS response to responseChan.

These paths are accompanied by a comment that reads:
// Returning an error will cause an internal retry, there is
// nothing we can do if parsing failed. Just drop the packet.
But it is not (or is no longer) accurate: returning an error from forwardWithDestChan
does not currently cause a retry.

Moreover, although these paths are currently unreachable due to implementation details,
if (*forwarder).forwardWithDestChan were to return nil without sending a response to
responseChan, it would cause a deadlock at one call site and a panic at another.

Therefore, we update (*forwarder).forwardWithDestChan to return errors in those two paths
and remove comments that were no longer accurate and misleading.

Updates #cleanup
Updates #13571

Signed-off-by: Nick Hill <mykola.khyl@gmail.com>
2024-10-11 12:02:27 -05:00
Nick Hill
e7545f2eac net/dns/resolver: translate 5xx DoH server errors into SERVFAIL DNS responses
If a DoH server returns an HTTP server error, rather than a SERVFAIL within
a successful HTTP response, we should handle it in the same way as SERVFAIL.

Updates #13571

Signed-off-by: Nick Hill <mykola.khyl@gmail.com>
2024-10-11 12:02:27 -05:00
Nick Hill
17335d2104 net/dns/resolver: forward SERVFAIL responses over PeerDNS
As per the docstring, (*forwarder).forwardWithDestChan should either send to responseChan
and return nil, or return a non-nil error (without sending to the channel).
However, this does not hold when all upstream DNS servers reply with an error.

We've been handling this special error path in (*Resolver).Query but not in (*Resolver).HandlePeerDNSQuery.
As a result, SERVFAIL responses from upstream servers were being converted into HTTP 503 responses,
instead of being properly forwarded as SERVFAIL within a successful HTTP response, as per RFC 8484, section 4.2.1:
A successful HTTP response with a 2xx status code (see Section 6.3 of [RFC7231]) is used for any valid DNS response,
regardless of the DNS response code. For example, a successful 2xx HTTP status code is used even with a DNS message
whose DNS response code indicates failure, such as SERVFAIL or NXDOMAIN.

In this PR we fix (*forwarder).forwardWithDestChan to no longer return an error when it sends a response to responseChan,
and remove the special handling in (*Resolver).Query, as it is no longer necessary.
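
A hedged sketch of the behaviour on the DoH-serving side (the handler shape is illustrative, not the real resolver code):

```
// Illustrative only; not the actual net/dns/resolver handler.
package sketch

import "net/http"

func writeDoHResponse(w http.ResponseWriter, dnsMsg []byte, err error) {
	if err != nil {
		// A genuine forwarding failure: there is no DNS message to return.
		http.Error(w, "DNS query failed", http.StatusServiceUnavailable)
		return
	}
	// Per RFC 8484, section 4.2.1, any valid DNS message, including one
	// whose response code is SERVFAIL or NXDOMAIN, is returned with a 2xx.
	w.Header().Set("Content-Type", "application/dns-message")
	w.WriteHeader(http.StatusOK)
	w.Write(dnsMsg)
}
```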

Updates #13571

Signed-off-by: Nick Hill <mykola.khyl@gmail.com>
2024-10-11 12:02:27 -05:00
Percy Wegmann
f9949cde8b client/tailscale,cmd/{cli,get-authkey,k8s-operator}: set distinct User-Agents
This helps better distinguish what is generating activity to the
Tailscale public API.

Updates tailscale/corp#23838

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-10-11 10:45:03 -05:00
Jordan Whited
33029d4486 net/netcheck: fix netcheck cli-triggered nil pointer deref (#13782)
Updates #13780

Signed-off-by: Jordan Whited <jordan@tailscale.com>
2024-10-10 15:52:47 -07:00
Jonathan Nobels
acb4a22dcc VERSION.txt: this is v1.77.0 (#13779) 2024-10-10 11:34:14 -07:00
Brad Fitzpatrick
508980603b ipn/conffile: don't depend on hujson on iOS/Android
Fixes #13772

Change-Id: I3ae03a5ee48c801f2e5ea12d1e54681df25d4604
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-10-10 09:14:36 -07:00
Andrew Dunham
91f58c5e63 tsnet: fix panic caused by logging after test finishes
Updates #13773

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I95e03eb6aef1639bd4a2efd3a415e2c10cdebc5a
2024-10-10 11:11:02 -04:00
Brad Fitzpatrick
1938685d39 clientupdate: don't link distsign on platforms that don't download
Updates tailscale/corp#20099

Change-Id: Ie3b782379b19d5f7890a8d3a378096b4f3e8a612
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-10-10 06:32:50 -07:00
Irbe Krumina
db1519cc9f k8s-operator/apis: revert ProxyGroup readiness cond name change (#13770)
No need to prefix this with 'Tailscale' for tailscale.com
custom resource types.

Updates tailscale/tailscale#13406

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-10-10 13:00:32 +01:00
Brad Fitzpatrick
2531065d10 clientupdate, ipn/localapi: don't use google/uuid, thin iOS deps
We were using google/uuid in two places and that brought in database/sql/driver.

We didn't need it in either place.

Updates #13760
Updates tailscale/corp#20099

Change-Id: Ieed32f1bebe35d35f47ec5a2a429268f24f11f1f
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-10-09 20:27:35 -07:00
Brad Fitzpatrick
fb420be176 safesocket: don't depend on go-ps on iOS
There's never a tailscaled on iOS. And we can't run child processes to
look for it anyway.

Updates tailscale/corp#20099

Change-Id: Ieb3776f4bb440c4f1c442fdd169bacbe17f23ddb
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-10-09 18:35:53 -07:00
Brad Fitzpatrick
367fba8520 control/controlhttp: don't link ts2021 server + websocket code on iOS
We probably shouldn't link it in anywhere, but let's fix iOS for now.

Updates #13762
Updates tailscale/corp#20099

Change-Id: Idac116e9340434334c256acba3866f02bd19827c
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-10-09 18:25:02 -07:00
Joe Tsai
52ef27ab7c taildrop: fix defer in loop (#13757)
However, this affects the scope of a defer.

Updates #11038

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2024-10-09 14:09:58 -07:00
Joe Tsai
5b7303817e syncs: allocate map with Map.WithLock (#13755)
One primary purpose of WithLock is to mutate the underlying map.
However, this can lead to a panic if it happens to be nil.
Thus, always allocate a map before passing it to f.
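
A hedged usage sketch (assuming WithLock hands f the underlying map, per the description above):

```
// Illustrative only; assumes syncs.Map[K, V].WithLock(f func(m map[K]V)).
var m syncs.Map[string, int] // zero value, never explicitly initialized

m.WithLock(func(inner map[string]int) {
	// Before this change, a zero-value Map could hand f a nil map here and
	// the write below would panic; now the map is allocated first.
	inner["hits"]++
})
```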

Updates tailscale/corp#11038

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2024-10-09 14:03:37 -07:00
Brad Fitzpatrick
c763b7a7db syncs: delete Map.Range, update callers to iterators
Updates #11038

Change-Id: I2819fed896cc4035aba5e4e141b52c12637373b1
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-10-09 13:56:13 -07:00
Percy Wegmann
2cadb80fb2 util/vizerror: add WrapWithMessage
This new function allows constructing vizerrors that combine a message
appropriate for display to users with a wrapped underlying error.

Updates tailscale/corp#23781

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2024-10-09 12:59:25 -05:00
Joe Tsai
910b4e8e6a syncs: add iterators to Map (#13739)
Add Keys, Values, and All to iterate over
all keys, values, and entries, respectively.
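
A hedged usage sketch, assuming the new methods return Go 1.23 range-over-func iterators as the description suggests:

```
// Illustrative only; assumes Keys/Values/All are iter.Seq / iter.Seq2 values.
var m syncs.Map[string, int]
m.Store("a", 1)
m.Store("b", 2)

for k, v := range m.All() { // every entry
	fmt.Println(k, v)
}
for k := range m.Keys() { // just the keys
	fmt.Println(k)
}
```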

Updates #11038

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2024-10-09 10:28:12 -07:00
Irbe Krumina
89ee6bbdae cmd/k8s-operator,k8s-operator/apis: set a readiness condition on egress Services for ProxyGroup (#13746)
cmd/k8s-operator,k8s-operator/apis: set a readiness condition on egress Services

Set a readiness condition on ExternalName Services that define a tailnet target
to route cluster traffic to via a ProxyGroup's proxies. The condition
is set to true if at least one proxy is currently set up to route.

Updates tailscale/tailscale#13406

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-10-09 18:23:40 +01:00
Brad Fitzpatrick
94c79659fa types/views: add iterators to the three Map view types
Their callers using Range are all kinda clunky feeling. Iterators
should make them more readable.

Updates #12912

Change-Id: I93461eba8e735276fda4a8558a4ae4bfd6c04922
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-10-09 10:00:29 -07:00
Irbe Krumina
f6d4d03355 cmd/k8s-operator: don't error out if ProxyClass for ProxyGroup not found. (#13736)
We don't need to error out and continuously reconcile if ProxyClass
has not (yet) been created, once it gets created the ProxyGroup
reconciler will get triggered.

Updates tailscale/tailscale#13406

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-10-09 13:23:00 +01:00
Irbe Krumina
60011e73b8 cmd/k8s-operator: fix Pod IP selection (#13743)
Ensure that .status.podIPs is used to select Pod's IP
in all reconcilers.

Updates tailscale/tailscale#13406

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-10-09 13:22:50 +01:00
Nick Khyl
da40609abd util/syspolicy, ipn: add "tailscale debug component-logs" support
Fixes #13313
Fixes #12687

Signed-off-by: Nick Khyl <nickk@tailscale.com>
2024-10-08 18:11:23 -05:00
Nick Khyl
29cf59a9b4 util/syspolicy/setting: update Snapshot to use Go 1.23 iterators
Updates #12912
Updates #12687

Signed-off-by: Nick Khyl <nickk@tailscale.com>
2024-10-08 15:02:23 -05:00
Tom Proctor
07c157ee9f cmd/k8s-operator: base ProxyGroup StatefulSet on common proxy.yaml definition (#13714)
As discussed in #13684, base the ProxyGroup's proxy definitions on the same
scaffolding as the existing proxies, as defined in proxy.yaml

Updates #13406

Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
2024-10-08 20:05:08 +01:00
Tom Proctor
83efadee9f kube/egressservices: improve egress ports config readability (#13722)
Instead of converting our PortMap struct to a string during marshalling
for use as a key, convert the whole collection of PortMaps to a list of
PortMap objects, which improves the readability of the JSON config while
still keeping the data structure we need in the code.

Updates #13406

Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
2024-10-08 19:48:18 +01:00
Brad Fitzpatrick
841eaacb07 net/sockstats: quiet some log spam in release builds
Updates #13731

Change-Id: Ibee85426827ebb9e43a1c42a9c07c847daa50117
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-10-08 11:02:46 -07:00
Irbe Krumina
861dc3631c cmd/{k8s-operator,containerboot},kube/egressservices: fix Pod IP check for dual stack clusters (#13721)
Currently egress Services for ProxyGroup only work for Pods and Services
with IPv4 addresses. Ensure that it works on dual stack clusters by reading
proxy Pod's IP from the .status.podIPs list that always contains both
IPv4 and IPv6 address (if the Pod has them) rather than .status.podIP that
could contain IPv6 only for a dual stack cluster.
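
A minimal sketch of the selection logic (simplified; the real reconcilers work with the corev1 Pod types):

```
// Illustrative only; in the real code the addresses come from pod.Status.PodIPs.
package sketch

import "net/netip"

func podIPv4(podIPs []string) (string, bool) {
	for _, s := range podIPs {
		if ip, err := netip.ParseAddr(s); err == nil && ip.Is4() {
			return s, true
		}
	}
	return "", false
}
```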

Updates tailscale/tailscale#13406

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-10-08 18:35:23 +01:00
Andrew Dunham
8ee7f82bf4 net/netcheck: don't panic if a region has no Nodes
Updates #13728

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I1e8319d6b2da013ae48f15113b30c9333e69cc0b
2024-10-08 12:52:27 -04:00
Tom Proctor
36cb2e4e5f cmd/k8s-operator,k8s-operator: use default ProxyClass if set for ProxyGroup (#13720)
The default ProxyClass can be set via helm chart or env var, and applies
to all proxies that do not otherwise have an explicit ProxyClass set.
This ensures proxies created by the new ProxyGroup CRD are consistent
with the behaviour of existing proxies

Nearby but unrelated changes:

* Fix up double error logs (controller runtime logs returned errors)
* Fix a couple of variable names

Updates #13406

Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
2024-10-08 17:34:34 +01:00
Tom Proctor
cba2e76568 cmd/containerboot: simplify k8s setup logic (#13627)
Rearrange conditionals to reduce indentation and make it a bit easier to read
the logic. Also makes some error message updates for better consistency
with the recent decision around capitalising resource names and the
upcoming addition of config secrets.

Updates #cleanup

Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
2024-10-08 17:13:00 +01:00
dependabot[bot]
866714a894 .github: Bump github/codeql-action from 3.26.9 to 3.26.11 (#13710)
Bumps [github/codeql-action](https://github.com/github/codeql-action) from 3.26.9 to 3.26.11.
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](461ef6c76d...6db8d6351f)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-07 22:15:40 -06:00
dependabot[bot]
266c14d6ca .github: Bump actions/cache from 4.0.2 to 4.1.0 (#13711)
Bumps [actions/cache](https://github.com/actions/cache) from 4.0.2 to 4.1.0.
- [Release notes](https://github.com/actions/cache/releases)
- [Changelog](https://github.com/actions/cache/blob/main/RELEASES.md)
- [Commits](0c45773b62...2cdf405574)

---
updated-dependencies:
- dependency-name: actions/cache
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-07 20:48:06 -06:00
Nick Hill
9a73462ea4 types/lazy: add DeferredInit type
It is sometimes necessary to defer initialization steps until the first actual usage
or until certain prerequisites have been met. For example, policy setting and
policy source registration should not occur during package initialization.
Instead, they should be deferred until the syspolicy package is actually used.
Additionally, any errors should be properly handled and reported, rather than
causing a panic within the package's init function.

In this PR, we add DeferredInit, to facilitate the registration and invocation
of deferred initialization functions.
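
A heavily hedged usage sketch; the method names here (Defer, Do) are hypothetical stand-ins for whatever the real types/lazy API exposes:

```
// Hypothetical shape only, not the actual tailscale.com/types/lazy API.
var policyInit lazy.DeferredInit

func init() {
	// Registration is cheap and does not run the func yet.
	policyInit.Defer(func() error {
		return registerPolicySources() // hypothetical helper
	})
}

func usePolicy() error {
	// First real use triggers the deferred funcs; errors surface here
	// instead of panicking inside package init.
	return policyInit.Do()
}
```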

Updates #12687

Signed-off-by: Nick Hill <mykola.khyl@gmail.com>
2024-10-07 15:43:22 -05:00
Brad Fitzpatrick
f3de4e96a8 derp: fix omitted word in comment
Fix comment just added in 38f236c725.

Updates tailscale/corp#23668
Updates #cleanup

Change-Id: Icbe112e24fcccf8c61c759c631ad09f3e5480547
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-10-07 12:21:10 -07:00
Irbe Krumina
7f016baa87 cmd/k8s-operator,k8s-operator: create ConfigMap for egress services + small fixes for egress services (#13715)
cmd/k8s-operator, k8s-operator: create ConfigMap for egress services + small reconciler fixes

Updates tailscale/tailscale#13406

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-10-07 20:12:56 +01:00
Brad Fitzpatrick
38f236c725 derp: add server metric for batch write sizes
Updates tailscale/corp#23668

Change-Id: Ie6268c4035a3b29fd53c072c5793e4cbba93d031
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-10-07 11:22:51 -07:00
Erisa A
c588c36233 types/key: use tlpub: in error message (#13707)
Fixes tailscale/corp#19442

Signed-off-by: Erisa A <erisa@tailscale.com>
2024-10-07 17:28:45 +01:00
Brad Fitzpatrick
cb10eddc26 tool/gocross: fix argument order to find
To avoid warning:

    find: warning: you have specified the global option -maxdepth after the argument -type, but global options are not positional, i.e., -maxdepth affects tests specified before it as well as those specified after it.  Please specify global options before other arguments.

Fixes tailscale/corp#23689

Change-Id: I91ee260b295c552c0a029883d5e406733e081478
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-10-07 08:07:03 -07:00
Tom Proctor
e48cddfbb3 cmd/{containerboot,k8s-operator},k8s-operator,kube: add ProxyGroup controller (#13684)
Implements the controller for the new ProxyGroup CRD, designed for
running proxies in a high availability configuration. Each proxy gets
its own config and state Secret, and its own tailscale node ID.

We are currently mounting all of the config secrets into the container,
but will stop mounting them and instead read them directly from the kube
API once #13578 is implemented.

Updates #13406

Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
2024-10-07 14:58:45 +01:00
Brad Fitzpatrick
1005cbc1e4 tailscaleroot: panic if tailscale_go build tag but Go toolchain mismatch
Fixes #13527

Change-Id: I05921969a84a303b60d1b3b9227aff9865662831
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-10-06 15:22:04 -07:00
Brad Fitzpatrick
c48cc08de2 wgengine: stop conntrack log spam about Canonical net probes
Like we do for the ones on iOS.

As a bonus, this removes a caller of tsaddr.IsTailscaleIP which we
want to revamp/remove soonish.

Updates #13687

Change-Id: Iab576a0c48e9005c7844ab52a0aba5ba343b750e
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-10-05 12:51:55 -07:00
Andrew Dunham
12f1bc7c77 envknob: support disk-based envknobs on the macsys build
Per my investigation just now, the $HOME environment variable is unset
on the macsys (standalone macOS GUI) variant, but the current working
directory is valid. Look for the environment variable file in that
location in addition to inside the home directory.

Updates #3707

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I481ae2e0d19b316244373e06865e3b5c3a9f3b88
2024-10-04 17:12:27 -04:00
Patrick O'Doherty
4ad3f01225 safeweb: allow passing http.Server in safeweb.Config (#13688)
Extend safeweb.Config with the ability to pass an http.Server that
safeweb will use to serve traffic.

Updates corp#8207

Signed-off-by: Patrick O'Doherty <patrick@tailscale.com>
2024-10-04 11:57:00 -07:00
kari-ts
8fdffb8da0 hostinfo: update SetPackage doc with new Android values (#13537)
Fixes tailscale/corp#23283

Signed-off-by: kari-ts <kari@tailscale.com>
2024-10-04 16:35:19 +00:00
Erisa A
f30d85310c cmd/tailscale/cli: don't print disablement secrets if init fails (#13673)
* cmd/tailscale/cli: don't print disablement secrets if init fails

Fixes tailscale/corp#11355

Signed-off-by: Erisa A <erisa@tailscale.com>

* cmd/tailscale/cli: changes from code review

Signed-off-by: Erisa A <erisa@tailscale.com>

* cmd/tailscale/cli: small grammar change

Signed-off-by: Erisa A <erisa@tailscale.com>

---------

Signed-off-by: Erisa A <erisa@tailscale.com>
2024-10-04 16:01:48 +01:00
Irbe Krumina
e8bb5d1be5 cmd/{k8s-operator,containerboot},k8s-operator,kube: reconcile ExternalName Services for ProxyGroup (#13635)
Adds a new reconciler that reconciles ExternalName Services that define a
tailnet target that should be exposed to cluster workloads on a ProxyGroup's
proxies.
The reconciler ensures that for each such service, the config mounted to
the proxies is updated with the tailnet target definition and that
an EndpointSlice and ClusterIP Service are created for the service.

Adds a new reconciler that ensures that as proxy Pods become ready to route
traffic to a tailnet target, the EndpointSlice for the target is updated
with the Pods' endpoints.

Updates tailscale/tailscale#13406

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-10-04 13:11:35 +01:00
Irbe Krumina
9bd158cc09 cmd/containerboot,util/linuxfw: create a SNAT rule for dst/src only once, clean up if needed (#13658)
AddSNATRuleForDst was adding a new rule each time it was called, including:
- if an identical rule already existed
- if a rule matching the destination, but with a different desired source, already existed

This was causing issues especially for the in-progress egress HA proxies work,
where the rules are now refreshed more frequently, so more redundant rules
were being created.

This change (sketched below):
- only creates the rule if it doesn't already exist
- deletes any existing rule for the same dst that has a different source
- also ensures that egress proxies refresh firewall rules
if the node's tailnet IP changes
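
A sketch of the first two behaviours (the types and method names here are illustrative, not the real util/linuxfw API):

```
// Illustrative pseudocode only; these types are not the real util/linuxfw API.
package sketch

import "net/netip"

type snatRule struct{ Dst, Src netip.Addr }

type ruleSet interface {
	SNATRulesForDst(dst netip.Addr) []snatRule
	DeleteRule(r snatRule) error
	AddSNATRule(dst, src netip.Addr) error
}

func ensureSNATRuleForDst(fw ruleSet, dst, wantSrc netip.Addr) error {
	for _, r := range fw.SNATRulesForDst(dst) {
		if r.Src == wantSrc {
			return nil // desired rule already present; don't add a duplicate
		}
		// Same destination, different (stale) source: delete it first.
		if err := fw.DeleteRule(r); err != nil {
			return err
		}
	}
	return fw.AddSNATRule(dst, wantSrc)
}
```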

Updates tailscale/tailscale#13406

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-10-03 20:15:00 +01:00
Patrick O'Doherty
a3c6a3a34f safeweb: add StrictTransportSecurityOptions config (#13679)
Add the ability to specify Strict-Transport-Security options in response
to BrowserMux HTTP requests in safeweb.

Updates https://github.com/tailscale/corp/issues/23375

Signed-off-by: Patrick O'Doherty <patrick@tailscale.com>
2024-10-03 18:38:29 +00:00
Brad Fitzpatrick
dc60c8d786 ssh/tailssh: pass window size pixels in IoctlSetWinsize events
Fixes #13669

Change-Id: Id44cfbb83183f1bbcbdc38c29238287b9d288707
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-10-03 09:24:28 -07:00
Andrea Gottardo
58c6bc2991 logpolicy: force TLS 1.3 handshake
Updates tailscale/tailscale#3363

We know `log.tailscale.io` supports TLS 1.3, so we can enforce its usage in the client to shake some bytes off the TLS handshake each time a connection is opened to upload logs.
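
The mechanics amount to a one-field change on the upload client's TLS config, roughly:

```
// Sketch: pin the minimum TLS version for the log upload transport.
tr := &http.Transport{
	TLSClientConfig: &tls.Config{
		MinVersion: tls.VersionTLS13,
	},
}
httpc := &http.Client{Transport: tr}
```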

Signed-off-by: Andrea Gottardo <andrea@gottardo.me>
2024-10-03 09:16:23 -07:00
Brad Fitzpatrick
5f88b65764 wgengine/netstack: check userspace ping success on Windows
Hacky temporary workaround until we do #13654 correctly.

Updates #13654

Change-Id: I764eaedbb112fb3a34dddb89572fec1b2543fd4a
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-10-03 09:07:39 -07:00
Brad Fitzpatrick
1f8eea53a8 control/controlclient: include HTTP status string in error message too
Not just its code.

Updates tailscale/corp#23584

Change-Id: I8001a675372fe15da797adde22f04488d8683448
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-10-03 08:37:16 -07:00
Brad Fitzpatrick
6f694da912 wgengine/magicsock: avoid log spam from ReceiveFunc on shutdown
The new logging in 2dd71e64ac is spammy at shutdown:

    Receive func ReceiveIPv6 exiting with error: *net.OpError, read udp [::]:38869: raw-read udp6 [::]:38869: use of closed network connection
    Receive func ReceiveIPv4 exiting with error: *net.OpError, read udp 0.0.0.0:36123: raw-read udp4 0.0.0.0:36123: use of closed network connection

Skip it if we're in the process of shutting down.

Updates #10976

Change-Id: I4f6d1c68465557eb9ffe335d43d740e499ba9786
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-10-02 20:22:12 -07:00
Naman Sood
09ec2f39b5 tailcfg: add func to check for known valid ServiceProtos (#13668)
Updates tailscale/corp#23574.

Signed-off-by: Naman Sood <mail@nsood.in>
2024-10-02 22:54:02 -04:00
Brad Fitzpatrick
383120c534 ipn/ipnlocal: don't run portlist code unless service collection is on
We were selectively uploading it, but we were still gathering it,
which can be a waste of CPU.

Also remove a bunch of complexity that I don't think matters anymore.

And add an envknob to force service collection off on a single node,
even if the tailnet policy permits it.

Fixes #13463

Change-Id: Ib6abe9e29d92df4ffa955225289f045eeeb279cf
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-10-02 18:08:31 -07:00
Nick Khyl
d837e0252f wf/firewall: allow link-local multicast for permitted local routes when the killswitch is on on Windows
When an Exit Node is used, we create a WFP rule to block all inbound and outbound traffic,
along with several rules to permit specific types of traffic. Notably, we allow all inbound and
outbound traffic to and from LocalRoutes specified in wgengine/router.Config. The list of allowed
routes always includes routes for internal interfaces, such as loopback and virtual Hyper-V/WSL2
interfaces, and may also include LAN routes if the "Allow local network access" option is enabled.
However, these permitting rules do not allow link-local multicast on the corresponding interfaces.
This results in broken mDNS/LLMNR, and potentially other similar issues, whenever an exit node is used.

In this PR, we update (*wf.Firewall).UpdatePermittedRoutes() to create rules allowing outbound and
inbound link-local multicast traffic to and from the permitted IP ranges, partially resolving the mDNS/LLMNR
and *.local name resolution issue.

Since Windows does not attempt to send mDNS/LLMNR queries if a catch-all NRPT rule is present,
it is still necessary to disable the creation of that rule using the disable-local-dns-override-via-nrpt nodeAttr.

Updates #13571

Signed-off-by: Nick Khyl <nickk@tailscale.com>
2024-10-02 18:36:01 -05:00
Brad Fitzpatrick
b8af93310a tstest: add the start of a testing wishlist
Of tests we wish we could easily add. One day.

Updates #13038

Change-Id: If44646f8d477674bbf2c9a6e58c3cd8f94a4e8df
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-10-02 16:08:41 -07:00
Andrea Gottardo
6de6ab015f net/dns: tweak DoH timeout, limit MaxConnsPerHost, require TLS 1.3 (#13564)
Updates tailscale/tailscale#6148

This is the result of some observations we made today with @raggi. The DNS over HTTPS client currently doesn't cap the number of connections it uses, either in-use or idle. A burst of DNS queries will open multiple connections. Idle connections remain open for 30 seconds (this interval is defined in the dohTransportTimeout constant). For DoH providers like NextDNS which send keep-alives, this means the cellular modem will remain up more than expected to send ACKs if any keep-alives are received while a connection remains idle during those 30 seconds. We can set the IdleConnTimeout to 10 seconds to ensure an idle connection is terminated if no other DNS queries come in after 10 seconds. Additionally, we can cap the number of connections to 1. This ensures that at all times there is only one open DoH connection, either active or idle. If idle, it will be terminated within 10 seconds from the last query.

We also observed all the DoH providers we support are capable of TLS 1.3. We can force this TLS version to reduce the number of packets sent/received each time a TLS connection is established.
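
Put together, the transport settings described above look roughly like this (values taken from this description, not copied from the code):

```
// Sketch of the DoH client's transport per the description above.
tr := &http.Transport{
	MaxConnsPerHost: 1,                // one DoH connection, active or idle
	IdleConnTimeout: 10 * time.Second, // drop idle conns well before the old 30s
	TLSClientConfig: &tls.Config{
		MinVersion: tls.VersionTLS13,
	},
}
```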

Signed-off-by: Andrea Gottardo <andrea@gottardo.me>
2024-10-02 09:26:11 -07:00
Brad Fitzpatrick
a01b545441 control/control{client,http}: don't noise dial localhost:443 in http-only tests
1eaad7d3de regressed some tests in another repo that were starting up
a control server on `http://127.0.0.1:nnn`. Because there was no https
running, and because of a bug in 1eaad7d3de (which ended up checking
the recently-dialed-control check twice in a single dial call), we
ended up forcing only the use of TLS dials in a test that only had
plaintext HTTP running.

Instead, plumb down support for explicitly disabling TLS fallbacks and
use it only when running in a test and using `http` scheme control
plane URLs to 127.0.0.1 or localhost.

This fixes the tests elsewhere.

Updates #13597

Change-Id: I97212ded21daf0bd510891a278078daec3eebaa6
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-10-02 10:41:08 -05:00
Brad Fitzpatrick
6b03e18975 control/controlhttp: rename a param from addr to optAddr for clarity
And update docs.

Updates #cleanup
Updates #13597 (tangentially; noted this cleanup while debugging)

Change-Id: I62440294c78b0bb3f5673be10318dd89af1e1bfe
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-10-02 10:41:08 -05:00
Brad Fitzpatrick
f49d218cfe net/dnscache: don't fall back to an IPv6 dial if we don't have IPv6
I noticed while debugging a test failure elsewhere that our failure
logs (when verbosity is cranked up) were uselessly attributing dial
failures to failure to dial an invalid IP address (this IPv6 address
we didn't have), rather than showing me the actual IPv4 connection
failure.

Updates #13597 (tangentially)

Change-Id: I45ffbefbc7e25ebfb15768006413a705b941dae5
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-10-02 10:41:08 -05:00
Brad Fitzpatrick
30f0fa95d9 control/controlclient: bound ReportHealthChange context lifetime to Direct client's
Fixes #13651

Change-Id: I8154d3cc0ca40fe7a0223b26ae2e77e8d6ba874b
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-10-02 10:40:39 -05:00
Andrea Gottardo
ed1ac799c8 net/captivedetection: set Timeout on net.Dialer (#13613)
Updates tailscale/tailscale#1634
Updates tailscale/tailscale#13265

Captive portal detection uses a custom `net.Dialer` in its `http.Client`. This custom Dialer ensures that the socket is bound specifically to the Wi-Fi interface. This is crucial because without it, if any default routes are set, the outgoing requests for detecting a captive portal would bypass Wi-Fi and go through the default route instead.

The Dialer did not have a Timeout property configured, so the default system timeout was applied. This caused issues in #13265, where we attempted to make captive portal detection requests over an IPsec interface used for Wi-Fi Calling. The call to `connect()` would fail and remain blocked until the system timeout (approximately 1 minute) was reached.

In #13598, I simply excluded the IPsec interface from captive portal detection. This was a quick and safe mitigation for the issue. This PR is a follow-up to make the process more robust, by setting a 3 seconds timeout on any connection establishment on any interface (this is the same timeout interval we were already setting on the HTTP client).
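
In net.Dialer terms, that is roughly:

```
// Sketch: 3-second connect timeout on the captive portal detection dialer.
d := &net.Dialer{
	Timeout: 3 * time.Second, // fail fast instead of waiting ~1 minute for the OS default
	// (The existing interface-binding Control hook stays in place.)
}
client := &http.Client{
	Transport: &http.Transport{DialContext: d.DialContext},
	Timeout:   3 * time.Second, // pre-existing per-request timeout mentioned above
}
```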

Signed-off-by: Andrea Gottardo <andrea@gottardo.me>
2024-10-02 15:29:46 +00:00
Nick Khyl
e66fe1f2e8 docs/windows/policy: add ADMX policy setting to configure the AuthKey
Updates tailscale/corp#22120

Signed-off-by: Nick Khyl <nickk@tailscale.com>
2024-10-02 09:19:19 -05:00
dependabot[bot]
992ee6dd0b .github: Bump github/codeql-action from 3.26.8 to 3.26.9 (#13625)
Bumps [github/codeql-action](https://github.com/github/codeql-action) from 3.26.8 to 3.26.9.
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](294a9d9291...461ef6c76d)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-01 23:27:30 -06:00
Brad Fitzpatrick
262c526c4e net/portmapper: don't treat 0.0.0.0 as a valid IP
Updates tailscale/corp#23538

Change-Id: I58b8c30abe43f1d1829f01eb9fb2c1e6e8db9476
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-10-01 16:11:47 -05:00
Andrew Dunham
16ef88754d net/portmapper: don't return unspecified/local external IPs
We were previously not checking that the external IP that we got back
from a UPnP portmap was a valid endpoint; add minimal validation that
this endpoint is something that is routeable by another host.
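
A minimal sketch of the kind of check involved (the exact conditions in the real code may differ):

```
// Sketch only; not the exact validation net/portmapper performs.
func plausiblyRoutable(ip netip.Addr) bool {
	return ip.IsValid() &&
		!ip.IsUnspecified() && // 0.0.0.0 or ::
		!ip.IsLoopback() &&
		!ip.IsLinkLocalUnicast()
}
```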

Updates tailscale/corp#23538

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: Id9649e7683394aced326d5348f4caa24d0efd532
2024-10-01 14:13:40 -04:00
Brad Fitzpatrick
1eaad7d3de control/controlhttp: fix connectivity on Alaska Air wifi
Updates #13597

Change-Id: Ifbf52b93fd35d64fcf80f8fddbfd610008fd8742
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-10-01 11:58:20 -05:00
Brad Fitzpatrick
fd32f0ddf4 control/controlhttp: factor out some code in prep for future change
This pulls out the clock and forceNoise443 code into methods on the
Dialer as cleanup in its own commit to make a future change less
distracting.

Updates #13597

Change-Id: I7001e57fe7b508605930c5b141a061b6fb908733
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-10-01 11:28:59 -05:00
Brad Fitzpatrick
d3f302d8e2 cmd/tailscale/cli: make 'tailscale debug ts2021' try twice
In prep for a future port 80 MITM fix, make the 'debug ts2021' command
retry once after a failure to give it a chance to pick a new strategy.

Updates #13597

Change-Id: Icb7bad60cbf0dbec78097df4a00e9795757bc8e4
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2024-10-01 11:28:59 -05:00
Mario Minardi
8f44ba1cd6 ssh: Add logic to set accepted environment variables in SSH session (#13559)
Add logic to set environment variables that match the SSH rule's
`acceptEnv` settings in the SSH session's environment.

Updates https://github.com/tailscale/corp/issues/22775

Signed-off-by: Mario Minardi <mario@tailscale.com>
2024-09-30 21:47:45 -06:00
dependabot[bot]
dd6b808acf .github: Bump peter-evans/create-pull-request from 7.0.1 to 7.0.5 (#13626)
Bumps [peter-evans/create-pull-request](https://github.com/peter-evans/create-pull-request) from 7.0.1 to 7.0.5.
- [Release notes](https://github.com/peter-evans/create-pull-request/releases)
- [Commits](8867c4aba1...5e914681df)

---
updated-dependencies:
- dependency-name: peter-evans/create-pull-request
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-30 21:12:44 -06:00
Anton Tolchanov
a70287d324 logpolicy: don't create a filch buffer if logging is disabled
Updates #9549

Signed-off-by: Anton Tolchanov <commits@knyar.net>
2024-09-30 11:36:08 +02:00
Maisem Ali
fb0f8fc0ae cmd/tsidp: add --dir flag
To better control where the tsnet state is being stored.

Updates #10263

Signed-off-by: Maisem Ali <maisem@tailscale.com>
2024-09-29 16:15:22 -07:00
Irbe Krumina
096b090caf cmd/containerboot,kube,util/linuxfw: configure kube egress proxies to route to 1+ tailnet targets (#13531)
* cmd/containerboot,kube,util/linuxfw: configure kube egress proxies to route to 1+ tailnet targets

This commit is the first part of the work to allow running multiple
replicas of the Kubernetes operator egress proxies per tailnet service,
and to allow exposing multiple tailnet services via each proxy replica.

This expands the existing iptables/nftables-based proxy configuration
mechanism.

A proxy can now be configured to route to one or more tailnet targets
via a (mounted) config file that, for each tailnet target, specifies:
- the target's tailnet IP or FQDN
- mappings of the container ports to which cluster workloads will send traffic,
to the tailnet target ports where the traffic should be forwarded.

Example configfile contents:
{
  "some-svc": {"tailnetTarget":{"fqdn":"foo.tailnetxyz.ts.net","ports"{"tcp:4006:80":{"protocol":"tcp","matchPort":4006,"targetPort":80},"tcp:4007:443":{"protocol":"tcp","matchPort":4007,"targetPort":443}}}}
}

A proxy that is configured with this config file will configure firewall rules
to route cluster traffic to the tailnet targets. It will then watch the config file
for updates as well as monitor relevant netmap updates and reconfigure firewall
as needed.

This adds a bunch of new iptables/nftables functionality to make it easier to dynamically update
the firewall rules without needing to restart the proxy Pod as well as to make
it easier to debug/understand the rules:

- for iptables, each portmapping is a DNAT rule with a comment pointing
at the 'service', i.e.:

-A PREROUTING ! -i tailscale0 -p tcp -m tcp --dport 4006 -m comment --comment "some-svc:tcp:4006 -> tcp:80" -j DNAT --to-destination 100.64.1.18:80
Additionally there is a SNAT rule for each tailnet target, to mask the source address.

- for nftables, a separate prerouting chain is created for each tailnet target
and all the portmapping rules are placed in that chain. This makes it easier
to look up rules and delete services when no longer needed.
(nftables allows hooking a custom chain to a prerouting hook, so no extra work
is needed to ensure that the rules in the service chains are evaluated).

The next steps will be to get the Kubernetes Operator to generate
the configfile and ensure it is mounted to the relevant proxy nodes.

Updates tailscale/tailscale#13406

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-09-29 16:30:53 +01:00
Irbe Krumina
c62b0732d2 cmd/k8s-operator: remove auth key once proxy has logged in (#13612)
The operator creates a non-reusable auth key for each of
the cluster proxies that it creates and puts in the tailscaled
configfile mounted to the proxies.
The proxies are always tagged, and their state is persisted
in a Kubernetes Secret, so their node keys are expected to never
be regenerated, so that they don't need to re-auth.

Some tailnet configurations, however, have seen issues where the auth
keys left in the tailscaled configfile cause the proxies
to end up in an unauthorized state after a restart at a later point
in time.
We have not yet found a way to reproduce this issue; however, this
commit removes the auth key from the config once the proxy can be
assumed to have logged in.

If an existing, logged-in proxy is upgraded to this version,
its redundant auth key will be removed from the conffile.

If an existing, logged-in proxy is downgraded from this version
to a previous version, it will work as before without a key being re-issued,
as the previous code did not enforce that a key must be present.

Updates tailscale/tailscale#13451

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2024-09-27 17:47:27 +01:00
Kristoffer Dalby
77832553e5 ipn/ipnlocal: add advertised and primary route metrics
Updates tailscale/corp#22075

Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2024-09-27 16:05:14 +02:00
Tom Proctor
cab2e6ea67 cmd/k8s-operator,k8s-operator: add ProxyGroup CRD (#13591)
The ProxyGroup CRD specifies a set of N pods which will each be a
tailnet device, and will have M different ingress or egress services
mapped onto them. It is the mechanism for specifying how highly
available proxies need to be. This commit only adds the definition, no
controller loop, and so it is not currently functional.

This commit also splits out TailnetDevice and RecorderTailnetDevice
into separate structs because the URL field is specific to recorders,
but we want a more generic struct for use in the ProxyGroup status field.

Updates #13406

Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
2024-09-27 01:05:56 +01:00
Andrew Dunham
7ec8bdf8b1 go.mod: upgrade golangci-lint
To pull in the fix for mgechev/revive#863 - seen in the GitHub Actions
check below:
    https://github.com/tailscale/tailscale/actions/runs/11057524933/job/30721507353?pr=13600

Updates #13602

Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: Ia04adc5d74bdbde14204645ca948794447b16776
2024-09-26 17:08:54 -04:00
Andrea Gottardo
69be54c7b6 net/captivedetection: exclude ipsec interfaces from captive portal detection (#13598)
Updates tailscale/tailscale#1634

Logs from some iOS users indicate that we're pointlessly performing captive portal detection on certain interfaces named ipsec*. These are tunnels with the cellular carrier that do not offer Internet access, and are only used to provide internet calling functionality (VoLTE / VoWiFi).

```
attempting to do captive portal detection on interface ipsec1
attempting to do captive portal detection on interface ipsec6
```

This PR excludes interfaces with the `ipsec` prefix from captive portal detection.

Signed-off-by: Andrea Gottardo <andrea@gottardo.me>
2024-09-26 17:28:10 +00:00
Kristoffer Dalby
5550a17391 wgengine: make opts.Metrics mandatory
Fixes #13582

Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2024-09-26 13:09:47 +02:00
Kristoffer Dalby
7d1160ddaa {ipn,net,tsnet}: use tsaddr helpers
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2024-09-26 12:17:31 +02:00
Kristoffer Dalby
f03e82a97c client/web: use tsaddr helpers
Updates #cleanup

Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2024-09-26 12:17:31 +02:00
Kristoffer Dalby
0909431660 cmd/tailscale: use tsaddr helpers
Updates #cleanup

Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2024-09-26 12:17:31 +02:00
Kristoffer Dalby
3dc33a0a5b net/tsaddr: add WithoutExitRoutes and IsExitRoute
Updates #cleanup

Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2024-09-26 12:17:31 +02:00
Mario Minardi
c90c9938c8 ssh/tailssh: add logic for matching against AcceptEnv patterns (#13466)
Add logic for parsing and matching against our planned format for
AcceptEnv values. Namely, this supports direct matches against string
values and matching where * and ? are treated as wildcard characters
which match against an arbitrary number of characters and a single
character respectively.
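
A hedged sketch of the matching described above (byte-oriented and unoptimized; the real ssh/tailssh implementation may differ):

```
// Illustrative matcher: '*' matches any run of characters, '?' exactly one.
func wildcardMatch(pattern, s string) bool {
	if pattern == "" {
		return s == ""
	}
	switch pattern[0] {
	case '*':
		for i := 0; i <= len(s); i++ {
			if wildcardMatch(pattern[1:], s[i:]) {
				return true
			}
		}
		return false
	case '?':
		return s != "" && wildcardMatch(pattern[1:], s[1:])
	default:
		return s != "" && s[0] == pattern[0] && wildcardMatch(pattern[1:], s[1:])
	}
}
```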

Actually using this logic in non-test code will come in subsequent
changes.

Updates https://github.com/tailscale/corp/issues/22775

Signed-off-by: Mario Minardi <mario@tailscale.com>
2024-09-25 21:09:05 -06:00
James Tucker
9eb59c72c1 wgengine/magicsock: fix check for EPERM on macOS
Like Linux, macOS will reply to sendto(2) with EPERM if the firewall is
currently blocking writes, though, as on Linux, this behavior is
undocumented. This is often caused by a faulting network extension or
content filter from EDR software.
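
The check itself is small; roughly (a sketch, using the standard errors and syscall packages):

```
// Sketch: treat an EPERM from sendto(2) as a transient, firewall-induced error.
func isSendEPERM(err error) bool {
	return errors.Is(err, syscall.EPERM)
}
```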

Updates #11710
Updates #12891
Updates #13511

Signed-off-by: James Tucker <james@tailscale.com>
2024-09-25 16:33:36 -07:00
431 changed files with 29675 additions and 5922 deletions

View File

@@ -49,13 +49,13 @@ jobs:
# Install a more recent Go that understands modern go.mod content.
- name: Install Go
uses: actions/setup-go@0a12ed9d6a96ab950c8f026ed9f722fe0da7ef32 # v5.0.2
uses: actions/setup-go@41dfa10bad2bb2ae585af6ee5bb4d7d973ad74ed # v5.1.0
with:
go-version-file: go.mod
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@294a9d92911152fe08befb9ec03e240add280cb3 # v3.26.8
uses: github/codeql-action/init@aa578102511db1f4524ed59b8cc2bae4f6e88195 # v3.27.6
with:
languages: ${{ matrix.language }}
# If you wish to specify custom queries, you can do so here or in a config file.
@@ -66,7 +66,7 @@ jobs:
# Autobuild attempts to build any compiled languages (C/C++, C#, or Java).
# If this step fails, then you should remove it and run the build manually (see below)
- name: Autobuild
uses: github/codeql-action/autobuild@294a9d92911152fe08befb9ec03e240add280cb3 # v3.26.8
uses: github/codeql-action/autobuild@aa578102511db1f4524ed59b8cc2bae4f6e88195 # v3.27.6
# Command-line programs to run using the OS shell.
# 📚 https://git.io/JvXDl
@@ -80,4 +80,4 @@ jobs:
# make release
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@294a9d92911152fe08befb9ec03e240add280cb3 # v3.26.8
uses: github/codeql-action/analyze@aa578102511db1f4524ed59b8cc2bae4f6e88195 # v3.27.6

View File

@@ -25,7 +25,7 @@ jobs:
steps:
- uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
- uses: actions/setup-go@0a12ed9d6a96ab950c8f026ed9f722fe0da7ef32 # v5.0.2
- uses: actions/setup-go@41dfa10bad2bb2ae585af6ee5bb4d7d973ad74ed # v5.1.0
with:
go-version-file: go.mod
cache: false

View File

@@ -6,11 +6,13 @@ on:
- "main"
paths:
- scripts/installer.sh
- .github/workflows/installer.yml
pull_request:
branches:
- "*"
paths:
- scripts/installer.sh
- .github/workflows/installer.yml
jobs:
test:
@@ -29,10 +31,9 @@ jobs:
- "debian:stable-slim"
- "debian:testing-slim"
- "debian:sid-slim"
- "ubuntu:18.04"
- "ubuntu:20.04"
- "ubuntu:22.04"
- "ubuntu:23.04"
- "ubuntu:24.04"
- "elementary/docker:stable"
- "elementary/docker:unstable"
- "parrotsec/core:lts-amd64"
@@ -48,7 +49,7 @@ jobs:
- "opensuse/leap:latest"
- "opensuse/tumbleweed:latest"
- "archlinux:latest"
- "alpine:3.14"
- "alpine:3.21"
- "alpine:latest"
- "alpine:edge"
deps:
@@ -58,10 +59,6 @@ jobs:
# Check a few images with wget rather than curl.
- { image: "debian:oldstable-slim", deps: "wget" }
- { image: "debian:sid-slim", deps: "wget" }
- { image: "ubuntu:23.04", deps: "wget" }
# Ubuntu 16.04 also needs apt-transport-https installed.
- { image: "ubuntu:16.04", deps: "curl apt-transport-https" }
- { image: "ubuntu:16.04", deps: "wget apt-transport-https" }
runs-on: ubuntu-latest
container:
image: ${{ matrix.image }}

View File

@@ -80,7 +80,7 @@ jobs:
- name: checkout
uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
- name: Restore Cache
uses: actions/cache@0c45773b623bea8c8e75f6c82b208c3cf94ea4f9 # v4.0.2
uses: actions/cache@1bd1e32a3bdc45362d1e726936510720a7c30a57 # v4.2.0
with:
# Note: unlike the other setups, this is only grabbing the mod download
# cache, rather than the whole mod directory, as the download cache
@@ -153,13 +153,13 @@ jobs:
uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
- name: Install Go
uses: actions/setup-go@0a12ed9d6a96ab950c8f026ed9f722fe0da7ef32 # v5.0.2
uses: actions/setup-go@41dfa10bad2bb2ae585af6ee5bb4d7d973ad74ed # v5.1.0
with:
go-version-file: go.mod
cache: false
- name: Restore Cache
uses: actions/cache@0c45773b623bea8c8e75f6c82b208c3cf94ea4f9 # v4.0.2
uses: actions/cache@1bd1e32a3bdc45362d1e726936510720a7c30a57 # v4.2.0
with:
# Note: unlike the other setups, this is only grabbing the mod download
# cache, rather than the whole mod directory, as the download cache
@@ -260,7 +260,7 @@ jobs:
- name: checkout
uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
- name: Restore Cache
uses: actions/cache@0c45773b623bea8c8e75f6c82b208c3cf94ea4f9 # v4.0.2
uses: actions/cache@1bd1e32a3bdc45362d1e726936510720a7c30a57 # v4.2.0
with:
# Note: unlike the other setups, this is only grabbing the mod download
# cache, rather than the whole mod directory, as the download cache
@@ -319,7 +319,7 @@ jobs:
- name: checkout
uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
- name: Restore Cache
uses: actions/cache@0c45773b623bea8c8e75f6c82b208c3cf94ea4f9 # v4.0.2
uses: actions/cache@1bd1e32a3bdc45362d1e726936510720a7c30a57 # v4.2.0
with:
# Note: unlike the other setups, this is only grabbing the mod download
# cache, rather than the whole mod directory, as the download cache
@@ -367,7 +367,7 @@ jobs:
- name: checkout
uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
- name: Restore Cache
uses: actions/cache@0c45773b623bea8c8e75f6c82b208c3cf94ea4f9 # v4.0.2
uses: actions/cache@1bd1e32a3bdc45362d1e726936510720a7c30a57 # v4.2.0
with:
# Note: unlike the other setups, this is only grabbing the mod download
# cache, rather than the whole mod directory, as the download cache
@@ -461,7 +461,7 @@ jobs:
run: |
echo "artifacts_path=$(realpath .)" >> $GITHUB_ENV
- name: upload crash
uses: actions/upload-artifact@50769540e7f4bd5e21e526ee35c689e35e0d6874 # v4.4.0
uses: actions/upload-artifact@b4b15b8c7c6ac21ea08fcf65892d2ee8f75cf882 # v4.4.3
if: steps.run.outcome != 'success' && steps.build.outcome == 'success'
with:
name: artifacts

View File

@@ -36,7 +36,7 @@ jobs:
private_key: ${{ secrets.LICENSING_APP_PRIVATE_KEY }}
- name: Send pull request
uses: peter-evans/create-pull-request@8867c4aba1b742c39f8d0ba35429c2dfa4b6cb20 #v7.0.1
uses: peter-evans/create-pull-request@5e914681df9dc83aa4e4905692ca88beb2f9e91f #v7.0.5
with:
token: ${{ steps.generate-token.outputs.token }}
author: Flakes Updater <noreply+flakes-updater@tailscale.com>

View File

@@ -35,7 +35,7 @@ jobs:
- name: Send pull request
id: pull-request
uses: peter-evans/create-pull-request@8867c4aba1b742c39f8d0ba35429c2dfa4b6cb20 #v7.0.1
uses: peter-evans/create-pull-request@5e914681df9dc83aa4e4905692ca88beb2f9e91f #v7.0.5
with:
token: ${{ steps.generate-token.outputs.token }}
author: OSS Updater <noreply+oss-updater@tailscale.com>

View File

@@ -100,7 +100,7 @@ publishdevoperator: ## Build and publish k8s-operator image to location specifie
@test "${REPO}" != "ghcr.io/tailscale/tailscale" || (echo "REPO=... must not be ghcr.io/tailscale/tailscale" && exit 1)
@test "${REPO}" != "tailscale/k8s-operator" || (echo "REPO=... must not be tailscale/k8s-operator" && exit 1)
@test "${REPO}" != "ghcr.io/tailscale/k8s-operator" || (echo "REPO=... must not be ghcr.io/tailscale/k8s-operator" && exit 1)
TAGS="${TAGS}" REPOS=${REPO} PLATFORM=${PLATFORM} PUSH=true TARGET=operator ./build_docker.sh
TAGS="${TAGS}" REPOS=${REPO} PLATFORM=${PLATFORM} PUSH=true TARGET=k8s-operator ./build_docker.sh
publishdevnameserver: ## Build and publish k8s-nameserver image to location specified by ${REPO}
@test -n "${REPO}" || (echo "REPO=... required; e.g. REPO=ghcr.io/${USER}/tailscale" && exit 1)
@@ -116,7 +116,6 @@ sshintegrationtest: ## Run the SSH integration tests in various Docker container
GOOS=linux GOARCH=amd64 ./tool/go build -o ssh/tailssh/testcontainers/tailscaled ./cmd/tailscaled && \
echo "Testing on ubuntu:focal" && docker build --build-arg="BASE=ubuntu:focal" -t ssh-ubuntu-focal ssh/tailssh/testcontainers && \
echo "Testing on ubuntu:jammy" && docker build --build-arg="BASE=ubuntu:jammy" -t ssh-ubuntu-jammy ssh/tailssh/testcontainers && \
echo "Testing on ubuntu:mantic" && docker build --build-arg="BASE=ubuntu:mantic" -t ssh-ubuntu-mantic ssh/tailssh/testcontainers && \
echo "Testing on ubuntu:noble" && docker build --build-arg="BASE=ubuntu:noble" -t ssh-ubuntu-noble ssh/tailssh/testcontainers && \
echo "Testing on alpine:latest" && docker build --build-arg="BASE=alpine:latest" -t ssh-alpine-latest ssh/tailssh/testcontainers

View File

@@ -72,7 +72,7 @@ Origin](https://en.wikipedia.org/wiki/Developer_Certificate_of_Origin)
`Signed-off-by` lines in commits.
See `git log` for our commit message style. It's basically the same as
[Go's style](https://github.com/golang/go/wiki/CommitMessage).
[Go's style](https://go.dev/wiki/CommitMessage).
## About Us

View File

@@ -1 +1 @@
1.75.0
1.79.0

View File

@@ -18,7 +18,6 @@ import (
"sync"
"time"
xmaps "golang.org/x/exp/maps"
"golang.org/x/net/dns/dnsmessage"
"tailscale.com/types/logger"
"tailscale.com/types/views"
@@ -291,11 +290,11 @@ func (e *AppConnector) updateDomains(domains []string) {
}
}
if err := e.routeAdvertiser.UnadvertiseRoute(toRemove...); err != nil {
e.logf("failed to unadvertise routes on domain removal: %v: %v: %v", xmaps.Keys(oldDomains), toRemove, err)
e.logf("failed to unadvertise routes on domain removal: %v: %v: %v", slicesx.MapKeys(oldDomains), toRemove, err)
}
}
e.logf("handling domains: %v and wildcards: %v", xmaps.Keys(e.domains), e.wildcards)
e.logf("handling domains: %v and wildcards: %v", slicesx.MapKeys(e.domains), e.wildcards)
}
// updateRoutes merges the supplied routes into the currently configured routes. The routes supplied
@@ -354,7 +353,7 @@ func (e *AppConnector) Domains() views.Slice[string] {
e.mu.Lock()
defer e.mu.Unlock()
return views.SliceOf(xmaps.Keys(e.domains))
return views.SliceOf(slicesx.MapKeys(e.domains))
}
// DomainRoutes returns a map of domains to resolved IP

View File

@@ -11,13 +11,13 @@ import (
"testing"
"time"
xmaps "golang.org/x/exp/maps"
"golang.org/x/net/dns/dnsmessage"
"tailscale.com/appc/appctest"
"tailscale.com/tstest"
"tailscale.com/util/clientmetric"
"tailscale.com/util/mak"
"tailscale.com/util/must"
"tailscale.com/util/slicesx"
)
func fakeStoreRoutes(*RouteInfo) error { return nil }
@@ -50,7 +50,7 @@ func TestUpdateDomains(t *testing.T) {
// domains are explicitly downcased on set.
a.UpdateDomains([]string{"UP.EXAMPLE.COM"})
a.Wait(ctx)
if got, want := xmaps.Keys(a.domains), []string{"up.example.com"}; !slices.Equal(got, want) {
if got, want := slicesx.MapKeys(a.domains), []string{"up.example.com"}; !slices.Equal(got, want) {
t.Errorf("got %v; want %v", got, want)
}
}

View File

@@ -0,0 +1,27 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build tailscale_go
package tailscaleroot
import (
"fmt"
"os"
"strings"
)
func init() {
tsRev, ok := tailscaleToolchainRev()
if !ok {
panic("binary built with tailscale_go build tag but failed to read build info or find tailscale.toolchain.rev in build info")
}
want := strings.TrimSpace(GoToolchainRev)
if tsRev != want {
if os.Getenv("TS_PERMIT_TOOLCHAIN_MISMATCH") == "1" {
fmt.Fprintf(os.Stderr, "tailscale.toolchain.rev = %q, want %q; but ignoring due to TS_PERMIT_TOOLCHAIN_MISMATCH=1\n", tsRev, want)
return
}
panic(fmt.Sprintf("binary built with tailscale_go build tag but Go toolchain %q doesn't match github.com/tailscale/tailscale expected value %q; override this failure with TS_PERMIT_TOOLCHAIN_MISMATCH=1", tsRev, want))
}
}

View File

@@ -17,12 +17,20 @@ eval "$(./build_dist.sh shellvars)"
DEFAULT_TARGET="client"
DEFAULT_TAGS="v${VERSION_SHORT},v${VERSION_MINOR}"
DEFAULT_BASE="tailscale/alpine-base:3.18"
# Set a few pre-defined OCI annotations. The source annotation is used by tools such as Renovate that scan the linked
# Github repo to find release notes for any new image tags. Note that for official Tailscale images the default
# annotations defined here will be overriden by release scripts that call this script.
# https://github.com/opencontainers/image-spec/blob/main/annotations.md#pre-defined-annotation-keys
DEFAULT_ANNOTATIONS="org.opencontainers.image.source=https://github.com/tailscale/tailscale/blob/main/build_docker.sh,org.opencontainers.image.vendor=Tailscale"
PUSH="${PUSH:-false}"
TARGET="${TARGET:-${DEFAULT_TARGET}}"
TAGS="${TAGS:-${DEFAULT_TAGS}}"
BASE="${BASE:-${DEFAULT_BASE}}"
PLATFORM="${PLATFORM:-}" # default to all platforms
# OCI annotations that will be added to the image.
# https://github.com/opencontainers/image-spec/blob/main/annotations.md
ANNOTATIONS="${ANNOTATIONS:-${DEFAULT_ANNOTATIONS}}"
case "$TARGET" in
client)
@@ -43,9 +51,10 @@ case "$TARGET" in
--repos="${REPOS}" \
--push="${PUSH}" \
--target="${PLATFORM}" \
--annotations="${ANNOTATIONS}" \
/usr/local/bin/containerboot
;;
operator)
k8s-operator)
DEFAULT_REPOS="tailscale/k8s-operator"
REPOS="${REPOS:-${DEFAULT_REPOS}}"
go run github.com/tailscale/mkctr \
@@ -56,9 +65,11 @@ case "$TARGET" in
-X tailscale.com/version.gitCommitStamp=${VERSION_GIT_HASH}" \
--base="${BASE}" \
--tags="${TAGS}" \
--gotags="ts_kube,ts_package_container" \
--repos="${REPOS}" \
--push="${PUSH}" \
--target="${PLATFORM}" \
--annotations="${ANNOTATIONS}" \
/usr/local/bin/operator
;;
k8s-nameserver)
@@ -72,9 +83,11 @@ case "$TARGET" in
-X tailscale.com/version.gitCommitStamp=${VERSION_GIT_HASH}" \
--base="${BASE}" \
--tags="${TAGS}" \
--gotags="ts_kube,ts_package_container" \
--repos="${REPOS}" \
--push="${PUSH}" \
--target="${PLATFORM}" \
--annotations="${ANNOTATIONS}" \
/usr/local/bin/k8s-nameserver
;;
*)

client/systray/logo.go Normal file

@@ -0,0 +1,319 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build cgo || !darwin
package systray
import (
"bytes"
"context"
"image"
"image/color"
"image/png"
"sync"
"time"
"fyne.io/systray"
"github.com/fogleman/gg"
)
// tsLogo represents the Tailscale logo displayed as the systray icon.
type tsLogo struct {
// dots represents the state of the 3x3 dot grid in the logo.
// A 0 represents a gray dot, any other value is a white dot.
dots [9]byte
// dotMask returns an image mask to be used when rendering the logo dots.
dotMask func(dc *gg.Context, borderUnits int, radius int) *image.Alpha
// overlay is called after the dots are rendered to draw an additional overlay.
overlay func(dc *gg.Context, borderUnits int, radius int)
}
var (
// disconnected is all gray dots
disconnected = tsLogo{dots: [9]byte{
0, 0, 0,
0, 0, 0,
0, 0, 0,
}}
// connected is the normal Tailscale logo
connected = tsLogo{dots: [9]byte{
0, 0, 0,
1, 1, 1,
0, 1, 0,
}}
// loading is a special tsLogo value that is not meant to be rendered directly,
// but indicates that the loading animation should be shown.
loading = tsLogo{dots: [9]byte{'l', 'o', 'a', 'd', 'i', 'n', 'g'}}
// loadingLogos are shown in sequence as an animated loading icon.
loadingLogos = []tsLogo{
{dots: [9]byte{
0, 1, 1,
1, 0, 1,
0, 0, 1,
}},
{dots: [9]byte{
0, 1, 1,
0, 0, 1,
0, 1, 0,
}},
{dots: [9]byte{
0, 1, 1,
0, 0, 0,
0, 0, 1,
}},
{dots: [9]byte{
0, 0, 1,
0, 1, 0,
0, 0, 0,
}},
{dots: [9]byte{
0, 1, 0,
0, 0, 0,
0, 0, 0,
}},
{dots: [9]byte{
0, 0, 0,
0, 0, 1,
0, 0, 0,
}},
{dots: [9]byte{
0, 0, 0,
0, 0, 0,
0, 0, 0,
}},
{dots: [9]byte{
0, 0, 1,
0, 0, 0,
0, 0, 0,
}},
{dots: [9]byte{
0, 0, 0,
0, 0, 0,
1, 0, 0,
}},
{dots: [9]byte{
0, 0, 0,
0, 0, 0,
1, 1, 0,
}},
{dots: [9]byte{
0, 0, 0,
1, 0, 0,
1, 1, 0,
}},
{dots: [9]byte{
0, 0, 0,
1, 1, 0,
0, 1, 0,
}},
{dots: [9]byte{
0, 0, 0,
1, 1, 0,
0, 1, 1,
}},
{dots: [9]byte{
0, 0, 0,
1, 1, 1,
0, 0, 1,
}},
{dots: [9]byte{
0, 1, 0,
0, 1, 1,
1, 0, 1,
}},
}
// exitNodeOnline is the Tailscale logo with an additional arrow overlay in the corner.
exitNodeOnline = tsLogo{
dots: [9]byte{
0, 0, 0,
1, 1, 1,
0, 1, 0,
},
// draw an arrow mask in the bottom right corner with a reasonably thick line width.
dotMask: func(dc *gg.Context, borderUnits int, radius int) *image.Alpha {
bu, r := float64(borderUnits), float64(radius)
x1 := r * (bu + 3.5)
y := r * (bu + 7)
x2 := x1 + (r * 5)
mc := gg.NewContext(dc.Width(), dc.Height())
mc.DrawLine(x1, y, x2, y) // arrow center line
mc.DrawLine(x2-(1.5*r), y-(1.5*r), x2, y) // top of arrow tip
mc.DrawLine(x2-(1.5*r), y+(1.5*r), x2, y) // bottom of arrow tip
mc.SetLineWidth(r * 3)
mc.Stroke()
return mc.AsMask()
},
// draw an arrow in the bottom right corner over the masked area.
overlay: func(dc *gg.Context, borderUnits int, radius int) {
bu, r := float64(borderUnits), float64(radius)
x1 := r * (bu + 3.5)
y := r * (bu + 7)
x2 := x1 + (r * 5)
dc.DrawLine(x1, y, x2, y) // arrow center line
dc.DrawLine(x2-(1.5*r), y-(1.5*r), x2, y) // top of arrow tip
dc.DrawLine(x2-(1.5*r), y+(1.5*r), x2, y) // bottom of arrow tip
dc.SetColor(fg)
dc.SetLineWidth(r)
dc.Stroke()
},
}
// exitNodeOffline is the Tailscale logo with a red "x" in the corner.
exitNodeOffline = tsLogo{
dots: [9]byte{
0, 0, 0,
1, 1, 1,
0, 1, 0,
},
// Draw a square mask that hides the four dots in the bottom right corner.
dotMask: func(dc *gg.Context, borderUnits int, radius int) *image.Alpha {
bu, r := float64(borderUnits), float64(radius)
x := r * (bu + 3)
mc := gg.NewContext(dc.Width(), dc.Height())
mc.DrawRectangle(x, x, r*6, r*6)
mc.Fill()
return mc.AsMask()
},
// draw a red "x" over the bottom right corner.
overlay: func(dc *gg.Context, borderUnits int, radius int) {
bu, r := float64(borderUnits), float64(radius)
x1 := r * (bu + 4)
x2 := x1 + (r * 3.5)
dc.DrawLine(x1, x1, x2, x2) // top-left to bottom-right stroke
dc.DrawLine(x1, x2, x2, x1) // bottom-left to top-right stroke
dc.SetColor(red)
dc.SetLineWidth(r)
dc.Stroke()
},
}
)
var (
bg = color.NRGBA{0, 0, 0, 255}
fg = color.NRGBA{255, 255, 255, 255}
gray = color.NRGBA{255, 255, 255, 102}
red = color.NRGBA{229, 111, 74, 255}
)
// render returns a PNG image of the logo.
func (logo tsLogo) render() *bytes.Buffer {
const borderUnits = 1
return logo.renderWithBorder(borderUnits)
}
// renderWithBorder returns a PNG image of the logo with the specified border width.
// One border unit is equal to the radius of a tailscale logo dot.
func (logo tsLogo) renderWithBorder(borderUnits int) *bytes.Buffer {
const radius = 25
dim := radius * (8 + borderUnits*2)
dc := gg.NewContext(dim, dim)
dc.DrawRectangle(0, 0, float64(dim), float64(dim))
dc.SetColor(bg)
dc.Fill()
if logo.dotMask != nil {
mask := logo.dotMask(dc, borderUnits, radius)
dc.SetMask(mask)
dc.InvertMask()
}
for y := 0; y < 3; y++ {
for x := 0; x < 3; x++ {
px := (borderUnits + 1 + 3*x) * radius
py := (borderUnits + 1 + 3*y) * radius
col := fg
if logo.dots[y*3+x] == 0 {
col = gray
}
dc.DrawCircle(float64(px), float64(py), radius)
dc.SetColor(col)
dc.Fill()
}
}
if logo.overlay != nil {
dc.ResetClip()
logo.overlay(dc, borderUnits, radius)
}
b := bytes.NewBuffer(nil)
png.Encode(b, dc.Image())
return b
}
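As a rough illustration of how these helpers are consumed: render returns an in-memory PNG that callers hand to the systray or write out, and renderWithBorder(3) is the wider-border variant used later for the desktop notification icon. A hedged standalone sketch (not part of the file; assumes an os import, and the output file names are illustrative):

buf := connected.render()                  // default 1-unit border
big := connected.renderWithBorder(3)       // wider border, matching the notification icon in systray.go
_ = os.WriteFile("tailscale-logo.png", buf.Bytes(), 0o644)
_ = os.WriteFile("tailscale-notify.png", big.Bytes(), 0o644)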
// setAppIcon renders logo and sets it as the systray icon.
func setAppIcon(icon tsLogo) {
if icon.dots == loading.dots {
startLoadingAnimation()
} else {
stopLoadingAnimation()
systray.SetIcon(icon.render().Bytes())
}
}
var (
loadingMu sync.Mutex // protects loadingCancel
// loadingCancel stops the loading animation in the systray icon.
// This is nil if the animation is not currently active.
loadingCancel func()
)
// startLoadingAnimation starts the animated loading icon in the system tray.
// The animation continues until [stopLoadingAnimation] is called.
// If the loading animation is already active, this func does nothing.
func startLoadingAnimation() {
loadingMu.Lock()
defer loadingMu.Unlock()
if loadingCancel != nil {
// loading icon already displayed
return
}
ctx := context.Background()
ctx, loadingCancel = context.WithCancel(ctx)
go func() {
t := time.NewTicker(500 * time.Millisecond)
var i int
for {
select {
case <-ctx.Done():
return
case <-t.C:
systray.SetIcon(loadingLogos[i].render().Bytes())
i++
if i >= len(loadingLogos) {
i = 0
}
}
}
}()
}
// stopLoadingAnimation stops the animated loading icon in the system tray.
// If the loading animation is not currently active, this func does nothing.
func stopLoadingAnimation() {
loadingMu.Lock()
defer loadingMu.Unlock()
if loadingCancel != nil {
loadingCancel()
loadingCancel = nil
}
}
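In practice the rest of the package drives the animation only through setAppIcon, so a typical sequence looks like:

setAppIcon(loading)   // the sentinel value starts the ticker-driven animation
// ... once the next state notification arrives:
setAppIcon(connected) // any concrete logo stops the animation and sets a static icon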

client/systray/systray.go Normal file

@@ -0,0 +1,712 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build cgo || !darwin
// Package systray provides a minimal Tailscale systray application.
package systray
import (
"context"
"errors"
"fmt"
"io"
"log"
"net/http"
"os"
"os/signal"
"runtime"
"slices"
"strings"
"sync"
"syscall"
"time"
"fyne.io/systray"
"github.com/atotto/clipboard"
dbus "github.com/godbus/dbus/v5"
"github.com/toqueteos/webbrowser"
"tailscale.com/client/tailscale"
"tailscale.com/ipn"
"tailscale.com/ipn/ipnstate"
"tailscale.com/tailcfg"
"tailscale.com/util/slicesx"
"tailscale.com/util/stringsx"
)
var (
// newMenuDelay is the amount of time to sleep after creating a new menu,
// but before adding items to it. This works around a bug in some dbus implementations.
newMenuDelay time.Duration
// if true, treat all mullvad exit node countries as single-city.
// Instead of rendering a submenu with cities, just select the highest-priority peer.
hideMullvadCities bool
)
// Run starts the systray menu and blocks until the menu exits.
func (menu *Menu) Run() {
menu.updateState()
// exit cleanly on SIGINT and SIGTERM
go func() {
interrupt := make(chan os.Signal, 1)
signal.Notify(interrupt, syscall.SIGINT, syscall.SIGTERM)
select {
case <-interrupt:
menu.onExit()
case <-menu.bgCtx.Done():
}
}()
go menu.lc.IncrementCounter(menu.bgCtx, "systray_start", 1)
systray.Run(menu.onReady, menu.onExit)
}
// Menu represents the systray menu, its items, and the current Tailscale state.
type Menu struct {
mu sync.Mutex // protects the entire Menu
lc tailscale.LocalClient
status *ipnstate.Status
curProfile ipn.LoginProfile
allProfiles []ipn.LoginProfile
bgCtx context.Context // ctx for background tasks not involving menu item clicks
bgCancel context.CancelFunc
// Top-level menu items
connect *systray.MenuItem
disconnect *systray.MenuItem
self *systray.MenuItem
exitNodes *systray.MenuItem
more *systray.MenuItem
quit *systray.MenuItem
rebuildCh chan struct{} // triggers a menu rebuild
accountsCh chan ipn.ProfileID
exitNodeCh chan tailcfg.StableNodeID // ID of selected exit node
eventCancel context.CancelFunc // cancel eventLoop
notificationIcon *os.File // icon used for desktop notifications
}
func (menu *Menu) init() {
if menu.bgCtx != nil {
// already initialized
return
}
menu.rebuildCh = make(chan struct{}, 1)
menu.accountsCh = make(chan ipn.ProfileID)
menu.exitNodeCh = make(chan tailcfg.StableNodeID)
// dbus wants a file path for notification icons, so copy to a temp file.
menu.notificationIcon, _ = os.CreateTemp("", "tailscale-systray.png")
io.Copy(menu.notificationIcon, connected.renderWithBorder(3))
menu.bgCtx, menu.bgCancel = context.WithCancel(context.Background())
go menu.watchIPNBus()
}
func init() {
if runtime.GOOS != "linux" {
// so far, these tweaks are only needed on Linux
return
}
desktop := strings.ToLower(os.Getenv("XDG_CURRENT_DESKTOP"))
switch desktop {
case "gnome":
// GNOME expands submenus downward in the main menu, rather than flyouts to the side.
// Either as a result of that or another limitation, there seems to be a maximum depth of submenus.
// Mullvad countries that have a city submenu are not being rendered, and so can't be selected.
// Handle this by simply treating all mullvad countries as single-city and select the best peer.
hideMullvadCities = true
case "kde":
// KDE doesn't need a delay, and actually won't render submenus
// if we delay for more than about 400µs.
newMenuDelay = 0
default:
// Add a slight delay to ensure the menu is created before adding items.
//
// Systray implementations that use libdbusmenu sometimes process messages out of order,
// resulting in errors such as:
// (waybar:153009): LIBDBUSMENU-GTK-WARNING **: 18:07:11.551: Children but no menu, someone's been naughty with their 'children-display' property: 'submenu'
//
// See also: https://github.com/fyne-io/systray/issues/12
newMenuDelay = 10 * time.Millisecond
}
}
// onReady is called by the systray package when the menu is ready to be built.
func (menu *Menu) onReady() {
log.Printf("starting")
setAppIcon(disconnected)
menu.rebuild()
}
// updateState updates the Menu state from the Tailscale local client.
func (menu *Menu) updateState() {
menu.mu.Lock()
defer menu.mu.Unlock()
menu.init()
var err error
menu.status, err = menu.lc.Status(menu.bgCtx)
if err != nil {
log.Print(err)
}
menu.curProfile, menu.allProfiles, err = menu.lc.ProfileStatus(menu.bgCtx)
if err != nil {
log.Print(err)
}
}
// rebuild the systray menu based on the current Tailscale state.
//
// We currently rebuild the entire menu because it is not easy to update the existing menu.
// You cannot iterate over the items in a menu, nor can you remove some items like separators.
// So for now we rebuild the whole thing, and can optimize this later if needed.
func (menu *Menu) rebuild() {
menu.mu.Lock()
defer menu.mu.Unlock()
menu.init()
if menu.eventCancel != nil {
menu.eventCancel()
}
ctx := context.Background()
ctx, menu.eventCancel = context.WithCancel(ctx)
systray.ResetMenu()
menu.connect = systray.AddMenuItem("Connect", "")
menu.disconnect = systray.AddMenuItem("Disconnect", "")
menu.disconnect.Hide()
systray.AddSeparator()
// delay to prevent race setting icon on first start
time.Sleep(newMenuDelay)
// Set systray menu icon and title.
// Also adjust connect/disconnect menu items if needed.
var backendState string
if menu.status != nil {
backendState = menu.status.BackendState
}
switch backendState {
case ipn.Running.String():
if menu.status.ExitNodeStatus != nil && !menu.status.ExitNodeStatus.ID.IsZero() {
if menu.status.ExitNodeStatus.Online {
setTooltip("Using exit node")
setAppIcon(exitNodeOnline)
} else {
setTooltip("Exit node offline")
setAppIcon(exitNodeOffline)
}
} else {
setTooltip(fmt.Sprintf("Connected to %s", menu.status.CurrentTailnet.Name))
setAppIcon(connected)
}
menu.connect.SetTitle("Connected")
menu.connect.Disable()
menu.disconnect.Show()
menu.disconnect.Enable()
case ipn.Starting.String():
setTooltip("Connecting")
setAppIcon(loading)
default:
setTooltip("Disconnected")
setAppIcon(disconnected)
}
account := "Account"
if pt := profileTitle(menu.curProfile); pt != "" {
account = pt
}
accounts := systray.AddMenuItem(account, "")
setRemoteIcon(accounts, menu.curProfile.UserProfile.ProfilePicURL)
time.Sleep(newMenuDelay)
for _, profile := range menu.allProfiles {
title := profileTitle(profile)
var item *systray.MenuItem
if profile.ID == menu.curProfile.ID {
item = accounts.AddSubMenuItemCheckbox(title, "", true)
} else {
item = accounts.AddSubMenuItem(title, "")
}
setRemoteIcon(item, profile.UserProfile.ProfilePicURL)
onClick(ctx, item, func(ctx context.Context) {
select {
case <-ctx.Done():
case menu.accountsCh <- profile.ID:
}
})
}
if menu.status != nil && menu.status.Self != nil && len(menu.status.Self.TailscaleIPs) > 0 {
title := fmt.Sprintf("This Device: %s (%s)", menu.status.Self.HostName, menu.status.Self.TailscaleIPs[0])
menu.self = systray.AddMenuItem(title, "")
} else {
menu.self = systray.AddMenuItem("This Device: not connected", "")
menu.self.Disable()
}
systray.AddSeparator()
menu.rebuildExitNodeMenu(ctx)
if menu.status != nil {
menu.more = systray.AddMenuItem("More settings", "")
onClick(ctx, menu.more, func(_ context.Context) {
webbrowser.Open("http://100.100.100.100/")
})
}
menu.quit = systray.AddMenuItem("Quit", "Quit the app")
menu.quit.Enable()
go menu.eventLoop(ctx)
}
// profileTitle returns the title string for a profile menu item.
func profileTitle(profile ipn.LoginProfile) string {
title := profile.Name
if profile.NetworkProfile.DomainName != "" {
if runtime.GOOS == "windows" || runtime.GOOS == "darwin" {
// windows and mac don't support multi-line menu
title += " (" + profile.NetworkProfile.DomainName + ")"
} else {
title += "\n" + profile.NetworkProfile.DomainName
}
}
return title
}
var (
cacheMu sync.Mutex
httpCache = map[string][]byte{} // URL => response body
)
// setRemoteIcon sets the icon for menu to the specified remote image.
// Remote images are fetched as needed and cached.
func setRemoteIcon(menu *systray.MenuItem, urlStr string) {
if menu == nil || urlStr == "" {
return
}
cacheMu.Lock()
b, ok := httpCache[urlStr]
if !ok {
resp, err := http.Get(urlStr)
if err == nil && resp.StatusCode == http.StatusOK {
b, _ = io.ReadAll(resp.Body)
httpCache[urlStr] = b
resp.Body.Close()
}
}
cacheMu.Unlock()
if len(b) > 0 {
menu.SetIcon(b)
}
}
// setTooltip sets the tooltip text for the systray icon.
func setTooltip(text string) {
if runtime.GOOS == "darwin" || runtime.GOOS == "windows" {
systray.SetTooltip(text)
} else {
// on Linux, SetTitle actually sets the tooltip
systray.SetTitle(text)
}
}
// eventLoop is the main event loop for handling click events on menu items
// and responding to Tailscale state changes.
// This method does not return until ctx.Done is closed.
func (menu *Menu) eventLoop(ctx context.Context) {
for {
select {
case <-ctx.Done():
return
case <-menu.rebuildCh:
menu.updateState()
menu.rebuild()
case <-menu.connect.ClickedCh:
_, err := menu.lc.EditPrefs(ctx, &ipn.MaskedPrefs{
Prefs: ipn.Prefs{
WantRunning: true,
},
WantRunningSet: true,
})
if err != nil {
log.Printf("error connecting: %v", err)
}
case <-menu.disconnect.ClickedCh:
_, err := menu.lc.EditPrefs(ctx, &ipn.MaskedPrefs{
Prefs: ipn.Prefs{
WantRunning: false,
},
WantRunningSet: true,
})
if err != nil {
log.Printf("error disconnecting: %v", err)
}
case <-menu.self.ClickedCh:
menu.copyTailscaleIP(menu.status.Self)
case id := <-menu.accountsCh:
if err := menu.lc.SwitchProfile(ctx, id); err != nil {
log.Printf("error switching to profile ID %v: %v", id, err)
}
case exitNode := <-menu.exitNodeCh:
if exitNode.IsZero() {
log.Print("disable exit node")
if err := menu.lc.SetUseExitNode(ctx, false); err != nil {
log.Printf("error disabling exit node: %v", err)
}
} else {
log.Printf("enable exit node: %v", exitNode)
mp := &ipn.MaskedPrefs{
Prefs: ipn.Prefs{
ExitNodeID: exitNode,
},
ExitNodeIDSet: true,
}
if _, err := menu.lc.EditPrefs(ctx, mp); err != nil {
log.Printf("error setting exit node: %v", err)
}
}
case <-menu.quit.ClickedCh:
systray.Quit()
}
}
}
// onClick registers a click handler for a menu item.
func onClick(ctx context.Context, item *systray.MenuItem, fn func(ctx context.Context)) {
go func() {
for {
select {
case <-ctx.Done():
return
case <-item.ClickedCh:
fn(ctx)
}
}
}()
}
// watchIPNBus subscribes to the tailscale event bus and triggers a menu rebuild when the state or prefs change.
// This method does not return.
func (menu *Menu) watchIPNBus() {
for {
if err := menu.watchIPNBusInner(); err != nil {
log.Println(err)
if errors.Is(err, context.Canceled) {
// If the context got canceled, we will never be able to
// reconnect to IPN bus, so exit the process.
log.Fatalf("watchIPNBus: %v", err)
}
}
// If our watch connection breaks, wait a bit before reconnecting. No
// reason to spam the logs if e.g. tailscaled is restarting or goes
// down.
time.Sleep(3 * time.Second)
}
}
func (menu *Menu) watchIPNBusInner() error {
watcher, err := menu.lc.WatchIPNBus(menu.bgCtx, ipn.NotifyNoPrivateKeys)
if err != nil {
return fmt.Errorf("watching ipn bus: %w", err)
}
defer watcher.Close()
for {
select {
case <-menu.bgCtx.Done():
return nil
default:
n, err := watcher.Next()
if err != nil {
return fmt.Errorf("ipnbus error: %w", err)
}
var rebuild bool
if n.State != nil {
log.Printf("new state: %v", n.State)
rebuild = true
}
if n.Prefs != nil {
rebuild = true
}
if rebuild {
menu.rebuildCh <- struct{}{}
}
}
}
}
// copyTailscaleIP copies the first Tailscale IP of the given device to the clipboard
// and sends a notification with the copied value.
func (menu *Menu) copyTailscaleIP(device *ipnstate.PeerStatus) {
if device == nil || len(device.TailscaleIPs) == 0 {
return
}
name := strings.Split(device.DNSName, ".")[0]
ip := device.TailscaleIPs[0].String()
err := clipboard.WriteAll(ip)
if err != nil {
log.Printf("clipboard error: %v", err)
}
menu.sendNotification(fmt.Sprintf("Copied Address for %v", name), ip)
}
// sendNotification sends a desktop notification with the given title and content.
func (menu *Menu) sendNotification(title, content string) {
conn, err := dbus.SessionBus()
if err != nil {
log.Printf("dbus: %v", err)
return
}
timeout := 3 * time.Second
obj := conn.Object("org.freedesktop.Notifications", "/org/freedesktop/Notifications")
call := obj.Call("org.freedesktop.Notifications.Notify", 0, "Tailscale", uint32(0),
menu.notificationIcon.Name(), title, content, []string{}, map[string]dbus.Variant{}, int32(timeout.Milliseconds()))
if call.Err != nil {
log.Printf("dbus: %v", call.Err)
}
}
func (menu *Menu) rebuildExitNodeMenu(ctx context.Context) {
if menu.status == nil {
return
}
status := menu.status
menu.exitNodes = systray.AddMenuItem("Exit Nodes", "")
time.Sleep(newMenuDelay)
// register a click handler for a menu item to set nodeID as the exit node.
setExitNodeOnClick := func(item *systray.MenuItem, nodeID tailcfg.StableNodeID) {
onClick(ctx, item, func(ctx context.Context) {
select {
case <-ctx.Done():
case menu.exitNodeCh <- nodeID:
}
})
}
noExitNodeMenu := menu.exitNodes.AddSubMenuItemCheckbox("None", "", status.ExitNodeStatus == nil)
setExitNodeOnClick(noExitNodeMenu, "")
// Show recommended exit node if available.
if status.Self.CapMap.Contains(tailcfg.NodeAttrSuggestExitNodeUI) {
sugg, err := menu.lc.SuggestExitNode(ctx)
if err == nil {
title := "Recommended: "
if loc := sugg.Location; loc.Valid() && loc.Country() != "" {
flag := countryFlag(loc.CountryCode())
title += fmt.Sprintf("%s %s: %s", flag, loc.Country(), loc.City())
} else {
title += strings.Split(sugg.Name, ".")[0]
}
menu.exitNodes.AddSeparator()
rm := menu.exitNodes.AddSubMenuItemCheckbox(title, "", false)
setExitNodeOnClick(rm, sugg.ID)
if status.ExitNodeStatus != nil && sugg.ID == status.ExitNodeStatus.ID {
rm.Check()
}
}
}
// Add tailnet exit nodes if present.
var tailnetExitNodes []*ipnstate.PeerStatus
for _, ps := range status.Peer {
if ps.ExitNodeOption && ps.Location == nil {
tailnetExitNodes = append(tailnetExitNodes, ps)
}
}
if len(tailnetExitNodes) > 0 {
menu.exitNodes.AddSeparator()
menu.exitNodes.AddSubMenuItem("Tailnet Exit Nodes", "").Disable()
for _, ps := range status.Peer {
if !ps.ExitNodeOption || ps.Location != nil {
continue
}
name := strings.Split(ps.DNSName, ".")[0]
if !ps.Online {
name += " (offline)"
}
sm := menu.exitNodes.AddSubMenuItemCheckbox(name, "", false)
if !ps.Online {
sm.Disable()
}
if status.ExitNodeStatus != nil && ps.ID == status.ExitNodeStatus.ID {
sm.Check()
}
setExitNodeOnClick(sm, ps.ID)
}
}
// Add mullvad exit nodes if present.
var mullvadExitNodes mullvadPeers
if status.Self.CapMap.Contains("mullvad") {
mullvadExitNodes = newMullvadPeers(status)
}
if len(mullvadExitNodes.countries) > 0 {
menu.exitNodes.AddSeparator()
menu.exitNodes.AddSubMenuItem("Location-based Exit Nodes", "").Disable()
mullvadMenu := menu.exitNodes.AddSubMenuItemCheckbox("Mullvad VPN", "", false)
for _, country := range mullvadExitNodes.sortedCountries() {
flag := countryFlag(country.code)
countryMenu := mullvadMenu.AddSubMenuItemCheckbox(flag+" "+country.name, "", false)
// single-city country, no submenu
if len(country.cities) == 1 || hideMullvadCities {
setExitNodeOnClick(countryMenu, country.best.ID)
if status.ExitNodeStatus != nil {
for _, city := range country.cities {
for _, ps := range city.peers {
if status.ExitNodeStatus.ID == ps.ID {
mullvadMenu.Check()
countryMenu.Check()
}
}
}
}
continue
}
// multi-city country, build submenu with "best available" option and cities.
time.Sleep(newMenuDelay)
bm := countryMenu.AddSubMenuItemCheckbox("Best Available", "", false)
setExitNodeOnClick(bm, country.best.ID)
countryMenu.AddSeparator()
for _, city := range country.sortedCities() {
cityMenu := countryMenu.AddSubMenuItemCheckbox(city.name, "", false)
setExitNodeOnClick(cityMenu, city.best.ID)
if status.ExitNodeStatus != nil {
for _, ps := range city.peers {
if status.ExitNodeStatus.ID == ps.ID {
mullvadMenu.Check()
countryMenu.Check()
cityMenu.Check()
}
}
}
}
}
}
// TODO: "Allow Local Network Access" and "Run Exit Node" menu items
}
// mullvadPeers contains all mullvad peer nodes, sorted by country and city.
type mullvadPeers struct {
countries map[string]*mvCountry // country code (uppercase) => country
}
// sortedCountries returns countries containing mullvad nodes, sorted by name.
func (mp mullvadPeers) sortedCountries() []*mvCountry {
countries := slicesx.MapValues(mp.countries)
slices.SortFunc(countries, func(a, b *mvCountry) int {
return stringsx.CompareFold(a.name, b.name)
})
return countries
}
type mvCountry struct {
code string
name string
best *ipnstate.PeerStatus // highest priority peer in the country
cities map[string]*mvCity // city code => city
}
// sortedCities returns cities containing mullvad nodes, sorted by name.
func (mc *mvCountry) sortedCities() []*mvCity {
cities := slicesx.MapValues(mc.cities)
slices.SortFunc(cities, func(a, b *mvCity) int {
return stringsx.CompareFold(a.name, b.name)
})
return cities
}
// countryFlag takes a 2-character ASCII string and returns the corresponding emoji flag.
// It returns the empty string on error.
func countryFlag(code string) string {
if len(code) != 2 {
return ""
}
runes := make([]rune, 0, 2)
for i := range 2 {
b := code[i] | 32 // lowercase
if b < 'a' || b > 'z' {
return ""
}
// https://en.wikipedia.org/wiki/Regional_indicator_symbol
runes = append(runes, 0x1F1E6+rune(b-'a'))
}
return string(runes)
}
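A few concrete values for the mapping above (each letter maps to the regional indicator at U+1F1E6 plus its alphabet offset):

countryFlag("us")  // "🇺🇸"
countryFlag("SE")  // "🇸🇪" (input is lowercased, so case does not matter)
countryFlag("usa") // ""  (must be exactly two characters)
countryFlag("u1")  // ""  (non-letter byte)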
type mvCity struct {
name string
best *ipnstate.PeerStatus // highest priority peer in the city
peers []*ipnstate.PeerStatus
}
func newMullvadPeers(status *ipnstate.Status) mullvadPeers {
countries := make(map[string]*mvCountry)
for _, ps := range status.Peer {
if !ps.ExitNodeOption || ps.Location == nil {
continue
}
loc := ps.Location
country, ok := countries[loc.CountryCode]
if !ok {
country = &mvCountry{
code: loc.CountryCode,
name: loc.Country,
cities: make(map[string]*mvCity),
}
countries[loc.CountryCode] = country
}
city, ok := countries[loc.CountryCode].cities[loc.CityCode]
if !ok {
city = &mvCity{
name: loc.City,
}
countries[loc.CountryCode].cities[loc.CityCode] = city
}
city.peers = append(city.peers, ps)
if city.best == nil || ps.Location.Priority > city.best.Location.Priority {
city.best = ps
}
if country.best == nil || ps.Location.Priority > country.best.Location.Priority {
country.best = ps
}
}
return mullvadPeers{countries}
}
// onExit is called by the systray package when the menu is exiting.
func (menu *Menu) onExit() {
log.Printf("exiting")
if menu.bgCancel != nil {
menu.bgCancel()
}
if menu.eventCancel != nil {
menu.eventCancel()
}
os.Remove(menu.notificationIcon.Name())
}


@@ -1,7 +1,7 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build go1.19
//go:build go1.22
package tailscale
@@ -40,6 +40,7 @@ import (
"tailscale.com/types/dnstype"
"tailscale.com/types/key"
"tailscale.com/types/tkatype"
"tailscale.com/util/syspolicy/setting"
)
// defaultLocalClient is the default LocalClient when using the legacy
@@ -492,6 +493,17 @@ func (lc *LocalClient) DebugAction(ctx context.Context, action string) error {
return nil
}
// DebugActionBody invokes a debug action with a body parameter, such as
// "debug-force-prefer-derp".
// These are development tools and subject to change or removal over time.
func (lc *LocalClient) DebugActionBody(ctx context.Context, action string, rbody io.Reader) error {
body, err := lc.send(ctx, "POST", "/localapi/v0/debug?action="+url.QueryEscape(action), 200, rbody)
if err != nil {
return fmt.Errorf("error %w: %s", err, body)
}
return nil
}
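A hedged usage sketch of the new method. The action name comes from the comment above, but the payload format is whatever tailscaled's debug handler expects; the value here is only a placeholder:

var lc tailscale.LocalClient
// Hypothetical payload; consult the debug handler for the real format.
if err := lc.DebugActionBody(ctx, "debug-force-prefer-derp", strings.NewReader("true")); err != nil {
	log.Printf("debug action: %v", err)
}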
// DebugResultJSON invokes a debug action and returns its result as something JSON-able.
// These are development tools and subject to change or removal over time.
func (lc *LocalClient) DebugResultJSON(ctx context.Context, action string) (any, error) {
@@ -814,6 +826,33 @@ func (lc *LocalClient) EditPrefs(ctx context.Context, mp *ipn.MaskedPrefs) (*ipn
return decodeJSON[*ipn.Prefs](body)
}
// GetEffectivePolicy returns the effective policy for the specified scope.
func (lc *LocalClient) GetEffectivePolicy(ctx context.Context, scope setting.PolicyScope) (*setting.Snapshot, error) {
scopeID, err := scope.MarshalText()
if err != nil {
return nil, err
}
body, err := lc.get200(ctx, "/localapi/v0/policy/"+string(scopeID))
if err != nil {
return nil, err
}
return decodeJSON[*setting.Snapshot](body)
}
// ReloadEffectivePolicy reloads the effective policy for the specified scope
// by reading and merging policy settings from all applicable policy sources.
func (lc *LocalClient) ReloadEffectivePolicy(ctx context.Context, scope setting.PolicyScope) (*setting.Snapshot, error) {
scopeID, err := scope.MarshalText()
if err != nil {
return nil, err
}
body, err := lc.send(ctx, "POST", "/localapi/v0/policy/"+string(scopeID), 200, http.NoBody)
if err != nil {
return nil, err
}
return decodeJSON[*setting.Snapshot](body)
}
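A hedged sketch of calling the two new policy endpoints; setting.DeviceScope is assumed here to be a valid PolicyScope value:

// Read the currently effective device policy, then force a re-read from
// all applicable policy sources.
snap, err := lc.GetEffectivePolicy(ctx, setting.DeviceScope)
if err != nil {
	log.Fatalf("get policy: %v", err)
}
_ = snap // inspect individual settings via the Snapshot
if _, err := lc.ReloadEffectivePolicy(ctx, setting.DeviceScope); err != nil {
	log.Fatalf("reload policy: %v", err)
}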
// GetDNSOSConfig returns the system DNS configuration for the current device.
// That is, it returns the DNS configuration that the system would use if Tailscale weren't being used.
func (lc *LocalClient) GetDNSOSConfig(ctx context.Context) (*apitype.DNSOSConfig, error) {
@@ -1299,6 +1338,17 @@ func (lc *LocalClient) SetServeConfig(ctx context.Context, config *ipn.ServeConf
return nil
}
// DisconnectControl shuts down all connections to control, making control consider this node inactive. It can be
// run on HA subnet router or app connector replicas before shutting them down, so that peers are told to switch over
// to another replica while existing connections still have a grace period in which to terminate.
func (lc *LocalClient) DisconnectControl(ctx context.Context) error {
_, _, err := lc.sendWithHeaders(ctx, "POST", "/localapi/v0/disconnect-control", 200, nil, nil)
if err != nil {
return fmt.Errorf("error disconnecting control: %w", err)
}
return nil
}
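A hedged sketch of the drain flow the comment describes, for example from a replica's pre-stop hook (the sleep duration is illustrative):

// Tell control this replica is going away so peers fail over, then give
// existing connections a short grace period before exiting.
if err := lc.DisconnectControl(ctx); err != nil {
	log.Printf("disconnect-control: %v", err)
}
time.Sleep(30 * time.Second)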
// NetworkLockDisable shuts down network-lock across the tailnet.
func (lc *LocalClient) NetworkLockDisable(ctx context.Context, secret []byte) error {
if _, err := lc.send(ctx, "POST", "/localapi/v0/tka/disable", 200, bytes.NewReader(secret)); err != nil {


@@ -51,6 +51,9 @@ type Client struct {
// HTTPClient optionally specifies an alternate HTTP client to use.
// If nil, http.DefaultClient is used.
HTTPClient *http.Client
// UserAgent optionally specifies an alternate User-Agent header to send on requests.
UserAgent string
}
func (c *Client) httpClient() *http.Client {
@@ -97,8 +100,9 @@ func (c *Client) setAuth(r *http.Request) {
// and can be changed manually by the user.
func NewClient(tailnet string, auth AuthMethod) *Client {
return &Client{
tailnet: tailnet,
auth: auth,
tailnet: tailnet,
auth: auth,
UserAgent: "tailscale-client-oss",
}
}
@@ -110,17 +114,16 @@ func (c *Client) Do(req *http.Request) (*http.Response, error) {
return nil, errors.New("use of Client without setting I_Acknowledge_This_API_Is_Unstable")
}
c.setAuth(req)
if c.UserAgent != "" {
req.Header.Set("User-Agent", c.UserAgent)
}
return c.httpClient().Do(req)
}
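A hedged sketch of how a caller uses the new field; construction of the AuthMethod is omitted and the tailnet name is illustrative:

tailscale.I_Acknowledge_This_API_Is_Unstable = true
c := tailscale.NewClient("example.com", auth) // auth is any AuthMethod
c.UserAgent = "my-tool/1.0"                   // optional; NewClient defaults to "tailscale-client-oss"
resp, err := c.Do(req)                        // Do attaches auth and the User-Agent header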
// sendRequest adds the authentication key to the request and sends it. It
// receives the response and reads up to 10MB of it.
func (c *Client) sendRequest(req *http.Request) ([]byte, *http.Response, error) {
if !I_Acknowledge_This_API_Is_Unstable {
return nil, nil, errors.New("use of Client without setting I_Acknowledge_This_API_Is_Unstable")
}
c.setAuth(req)
resp, err := c.httpClient().Do(req)
resp, err := c.Do(req)
if err != nil {
return nil, resp, err
}


@@ -17,7 +17,6 @@ import (
"os"
"path"
"path/filepath"
"slices"
"strings"
"sync"
"time"
@@ -27,6 +26,7 @@ import (
"tailscale.com/client/tailscale/apitype"
"tailscale.com/clientupdate"
"tailscale.com/envknob"
"tailscale.com/envknob/featureknob"
"tailscale.com/hostinfo"
"tailscale.com/ipn"
"tailscale.com/ipn/ipnstate"
@@ -35,6 +35,7 @@ import (
"tailscale.com/net/tsaddr"
"tailscale.com/tailcfg"
"tailscale.com/types/logger"
"tailscale.com/types/views"
"tailscale.com/util/httpm"
"tailscale.com/version"
"tailscale.com/version/distro"
@@ -113,11 +114,6 @@ const (
ManageServerMode ServerMode = "manage"
)
var (
exitNodeRouteV4 = netip.MustParsePrefix("0.0.0.0/0")
exitNodeRouteV6 = netip.MustParsePrefix("::/0")
)
// ServerOpts contains options for constructing a new Server.
type ServerOpts struct {
// Mode specifies the mode of web client being constructed.
@@ -808,8 +804,8 @@ type nodeData struct {
DeviceName string
TailnetName string // TLS cert name
DomainName string
IPv4 string
IPv6 string
IPv4 netip.Addr
IPv6 netip.Addr
OS string
IPNVersion string
@@ -868,10 +864,14 @@ func (s *Server) serveGetNodeData(w http.ResponseWriter, r *http.Request) {
return
}
filterRules, _ := s.lc.DebugPacketFilterRules(r.Context())
ipv4, ipv6 := s.selfNodeAddresses(r, st)
data := &nodeData{
ID: st.Self.ID,
Status: st.BackendState,
DeviceName: strings.Split(st.Self.DNSName, ".")[0],
IPv4: ipv4,
IPv6: ipv6,
OS: st.Self.OS,
IPNVersion: strings.Split(st.Version, "-")[0],
Profile: st.User[st.Self.UserID],
@@ -891,10 +891,6 @@ func (s *Server) serveGetNodeData(w http.ResponseWriter, r *http.Request) {
ACLAllowsAnyIncomingTraffic: s.aclsAllowAccess(filterRules),
}
ipv4, ipv6 := s.selfNodeAddresses(r, st)
data.IPv4 = ipv4.String()
data.IPv6 = ipv6.String()
if hostinfo.GetEnvType() == hostinfo.HomeAssistantAddOn && data.URLPrefix == "" {
// X-Ingress-Path is the path prefix in use for Home Assistant
// https://developers.home-assistant.io/docs/add-ons/presentation#ingress
@@ -927,10 +923,10 @@ func (s *Server) serveGetNodeData(w http.ResponseWriter, r *http.Request) {
return p == route
})
}
data.AdvertisingExitNodeApproved = routeApproved(exitNodeRouteV4) || routeApproved(exitNodeRouteV6)
data.AdvertisingExitNodeApproved = routeApproved(tsaddr.AllIPv4()) || routeApproved(tsaddr.AllIPv6())
for _, r := range prefs.AdvertiseRoutes {
if r == exitNodeRouteV4 || r == exitNodeRouteV6 {
if tsaddr.IsExitRoute(r) {
data.AdvertisingExitNode = true
} else {
data.AdvertisedRoutes = append(data.AdvertisedRoutes, subnetRoute{
@@ -965,37 +961,16 @@ func (s *Server) serveGetNodeData(w http.ResponseWriter, r *http.Request) {
}
func availableFeatures() map[string]bool {
env := hostinfo.GetEnvType()
features := map[string]bool{
"advertise-exit-node": true, // available on all platforms
"advertise-routes": true, // available on all platforms
"use-exit-node": canUseExitNode(env) == nil,
"ssh": envknob.CanRunTailscaleSSH() == nil,
"use-exit-node": featureknob.CanUseExitNode() == nil,
"ssh": featureknob.CanRunTailscaleSSH() == nil,
"auto-update": version.IsUnstableBuild() && clientupdate.CanAutoUpdate(),
}
if env == hostinfo.HomeAssistantAddOn {
// Setting SSH on Home Assistant causes trouble on startup
// (since the flag is not being passed to `tailscale up`).
// Although Tailscale SSH does work here,
// it's not terribly useful since it's running in a separate container.
features["ssh"] = false
}
return features
}
func canUseExitNode(env hostinfo.EnvType) error {
switch dist := distro.Get(); dist {
case distro.Synology, // see https://github.com/tailscale/tailscale/issues/1995
distro.QNAP,
distro.Unraid:
return fmt.Errorf("Tailscale exit nodes cannot be used on %s.", dist)
}
if env == hostinfo.HomeAssistantAddOn {
return errors.New("Tailscale exit nodes cannot be used on Home Assistant.")
}
return nil
}
// aclsAllowAccess returns whether tailnet ACLs (as expressed in the provided filter rules)
// permit any devices to access the local web client.
// This does not currently check whether a specific device can connect, just any device.
@@ -1071,7 +1046,7 @@ func (s *Server) servePostRoutes(ctx context.Context, data postRoutesRequest) er
var currNonExitRoutes []string
var currAdvertisingExitNode bool
for _, r := range prefs.AdvertiseRoutes {
if r == exitNodeRouteV4 || r == exitNodeRouteV6 {
if tsaddr.IsExitRoute(r) {
currAdvertisingExitNode = true
continue
}
@@ -1092,12 +1067,7 @@ func (s *Server) servePostRoutes(ctx context.Context, data postRoutesRequest) er
return err
}
hasExitNodeRoute := func(all []netip.Prefix) bool {
return slices.Contains(all, exitNodeRouteV4) ||
slices.Contains(all, exitNodeRouteV6)
}
if !data.UseExitNode.IsZero() && hasExitNodeRoute(routes) {
if !data.UseExitNode.IsZero() && tsaddr.ContainsExitRoutes(views.SliceOf(routes)) {
return errors.New("cannot use and advertise exit node at same time")
}
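For reference, a hedged sketch of the tsaddr helpers these hunks switch to, based only on how they are used above (the exact semantics live in tailscale.com/net/tsaddr):

v4, v6 := tsaddr.AllIPv4(), tsaddr.AllIPv6()                // 0.0.0.0/0 and ::/0
tsaddr.IsExitRoute(v4)                                      // true for either default route
tsaddr.IsExitRoute(netip.MustParsePrefix("192.168.0.0/24")) // false for an ordinary subnet route
// ContainsExitRoutes reports whether a route slice advertises the exit-node
// default routes (see the package for whether one or both are required).
ok := tsaddr.ContainsExitRoutes(views.SliceOf([]netip.Prefix{v4, v6}))
_ = ok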


@@ -27,11 +27,8 @@ import (
"strconv"
"strings"
"github.com/google/uuid"
"tailscale.com/clientupdate/distsign"
"tailscale.com/types/logger"
"tailscale.com/util/cmpver"
"tailscale.com/util/winutil"
"tailscale.com/version"
"tailscale.com/version/distro"
)
@@ -756,164 +753,6 @@ func (up *Updater) updateMacAppStore() error {
return nil
}
const (
// winMSIEnv is the environment variable that, if set, is the MSI file for
// the update command to install. It's passed like this so we can stop the
// tailscale.exe process from running before the msiexec process runs and
// tries to overwrite ourselves.
winMSIEnv = "TS_UPDATE_WIN_MSI"
// winExePathEnv is the environment variable that is set along with
// winMSIEnv and carries the full path of the calling tailscale.exe binary.
// It is used to re-launch the GUI process (tailscale-ipn.exe) after
// install is complete.
winExePathEnv = "TS_UPDATE_WIN_EXE_PATH"
)
var (
verifyAuthenticode func(string) error // set non-nil only on Windows
markTempFileFunc func(string) error // set non-nil only on Windows
)
func (up *Updater) updateWindows() error {
if msi := os.Getenv(winMSIEnv); msi != "" {
// stdout/stderr from this part of the install could be lost since the
// parent tailscaled is replaced. Create a temp log file to have some
// output to debug with in case update fails.
close, err := up.switchOutputToFile()
if err != nil {
up.Logf("failed to create log file for installation: %v; proceeding with existing outputs", err)
} else {
defer close.Close()
}
up.Logf("installing %v ...", msi)
if err := up.installMSI(msi); err != nil {
up.Logf("MSI install failed: %v", err)
return err
}
up.Logf("success.")
return nil
}
if !winutil.IsCurrentProcessElevated() {
return errors.New(`update must be run as Administrator
you can run the command prompt as Administrator one of these ways:
* right-click cmd.exe, select 'Run as administrator'
* press Windows+x, then press a
* press Windows+r, type in "cmd", then press Ctrl+Shift+Enter`)
}
ver, err := requestedTailscaleVersion(up.Version, up.Track)
if err != nil {
return err
}
arch := runtime.GOARCH
if arch == "386" {
arch = "x86"
}
if !up.confirm(ver) {
return nil
}
tsDir := filepath.Join(os.Getenv("ProgramData"), "Tailscale")
msiDir := filepath.Join(tsDir, "MSICache")
if fi, err := os.Stat(tsDir); err != nil {
return fmt.Errorf("expected %s to exist, got stat error: %w", tsDir, err)
} else if !fi.IsDir() {
return fmt.Errorf("expected %s to be a directory; got %v", tsDir, fi.Mode())
}
if err := os.MkdirAll(msiDir, 0700); err != nil {
return err
}
up.cleanupOldDownloads(filepath.Join(msiDir, "*.msi"))
pkgsPath := fmt.Sprintf("%s/tailscale-setup-%s-%s.msi", up.Track, ver, arch)
msiTarget := filepath.Join(msiDir, path.Base(pkgsPath))
if err := up.downloadURLToFile(pkgsPath, msiTarget); err != nil {
return err
}
up.Logf("verifying MSI authenticode...")
if err := verifyAuthenticode(msiTarget); err != nil {
return fmt.Errorf("authenticode verification of %s failed: %w", msiTarget, err)
}
up.Logf("authenticode verification succeeded")
up.Logf("making tailscale.exe copy to switch to...")
up.cleanupOldDownloads(filepath.Join(os.TempDir(), "tailscale-updater-*.exe"))
selfOrig, selfCopy, err := makeSelfCopy()
if err != nil {
return err
}
defer os.Remove(selfCopy)
up.Logf("running tailscale.exe copy for final install...")
cmd := exec.Command(selfCopy, "update")
cmd.Env = append(os.Environ(), winMSIEnv+"="+msiTarget, winExePathEnv+"="+selfOrig)
cmd.Stdout = up.Stderr
cmd.Stderr = up.Stderr
cmd.Stdin = os.Stdin
if err := cmd.Start(); err != nil {
return err
}
// Once it's started, exit ourselves, so the binary is free
// to be replaced.
os.Exit(0)
panic("unreachable")
}
func (up *Updater) switchOutputToFile() (io.Closer, error) {
var logFilePath string
exePath, err := os.Executable()
if err != nil {
logFilePath = filepath.Join(os.TempDir(), "tailscale-updater.log")
} else {
logFilePath = strings.TrimSuffix(exePath, ".exe") + ".log"
}
up.Logf("writing update output to %q", logFilePath)
logFile, err := os.Create(logFilePath)
if err != nil {
return nil, err
}
up.Logf = func(m string, args ...any) {
fmt.Fprintf(logFile, m+"\n", args...)
}
up.Stdout = logFile
up.Stderr = logFile
return logFile, nil
}
func (up *Updater) installMSI(msi string) error {
var err error
for tries := 0; tries < 2; tries++ {
cmd := exec.Command("msiexec.exe", "/i", filepath.Base(msi), "/quiet", "/norestart", "/qn")
cmd.Dir = filepath.Dir(msi)
cmd.Stdout = up.Stdout
cmd.Stderr = up.Stderr
cmd.Stdin = os.Stdin
err = cmd.Run()
if err == nil {
break
}
up.Logf("Install attempt failed: %v", err)
uninstallVersion := up.currentVersion
if v := os.Getenv("TS_DEBUG_UNINSTALL_VERSION"); v != "" {
uninstallVersion = v
}
// Assume it's a downgrade, which msiexec won't permit. Uninstall our current version first.
up.Logf("Uninstalling current version %q for downgrade...", uninstallVersion)
cmd = exec.Command("msiexec.exe", "/x", msiUUIDForVersion(uninstallVersion), "/norestart", "/qn")
cmd.Stdout = up.Stdout
cmd.Stderr = up.Stderr
cmd.Stdin = os.Stdin
err = cmd.Run()
up.Logf("msiexec uninstall: %v", err)
}
return err
}
// cleanupOldDownloads removes all files matching glob (see filepath.Glob).
// Only regular files are removed, so the glob must match specific files and
// not directories.
@@ -938,53 +777,6 @@ func (up *Updater) cleanupOldDownloads(glob string) {
}
}
func msiUUIDForVersion(ver string) string {
arch := runtime.GOARCH
if arch == "386" {
arch = "x86"
}
track, err := versionToTrack(ver)
if err != nil {
track = UnstableTrack
}
msiURL := fmt.Sprintf("https://pkgs.tailscale.com/%s/tailscale-setup-%s-%s.msi", track, ver, arch)
return "{" + strings.ToUpper(uuid.NewSHA1(uuid.NameSpaceURL, []byte(msiURL)).String()) + "}"
}
func makeSelfCopy() (origPathExe, tmpPathExe string, err error) {
selfExe, err := os.Executable()
if err != nil {
return "", "", err
}
f, err := os.Open(selfExe)
if err != nil {
return "", "", err
}
defer f.Close()
f2, err := os.CreateTemp("", "tailscale-updater-*.exe")
if err != nil {
return "", "", err
}
if f := markTempFileFunc; f != nil {
if err := f(f2.Name()); err != nil {
return "", "", err
}
}
if _, err := io.Copy(f2, f); err != nil {
f2.Close()
return "", "", err
}
return selfExe, f2.Name(), f2.Close()
}
func (up *Updater) downloadURLToFile(pathSrc, fileDst string) (ret error) {
c, err := distsign.NewClient(up.Logf, up.PkgsAddr)
if err != nil {
return err
}
return c.Download(context.Background(), pathSrc, fileDst)
}
func (up *Updater) updateFreeBSD() (err error) {
if up.Version != "" {
return errors.New("installing a specific version on FreeBSD is not supported")


@@ -0,0 +1,20 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build (linux && !android) || windows
package clientupdate
import (
"context"
"tailscale.com/clientupdate/distsign"
)
func (up *Updater) downloadURLToFile(pathSrc, fileDst string) (ret error) {
c, err := distsign.NewClient(up.Logf, up.PkgsAddr)
if err != nil {
return err
}
return c.Download(context.Background(), pathSrc, fileDst)
}


@@ -0,0 +1,10 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build !((linux && !android) || windows)
package clientupdate
func (up *Updater) downloadURLToFile(pathSrc, fileDst string) (ret error) {
panic("unreachable")
}


@@ -0,0 +1,10 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build !windows
package clientupdate
func (up *Updater) updateWindows() error {
panic("unreachable")
}


@@ -7,13 +7,57 @@
package clientupdate
import (
"errors"
"fmt"
"io"
"os"
"os/exec"
"path"
"path/filepath"
"runtime"
"strings"
"github.com/google/uuid"
"golang.org/x/sys/windows"
"tailscale.com/util/winutil"
"tailscale.com/util/winutil/authenticode"
)
func init() {
markTempFileFunc = markTempFileWindows
verifyAuthenticode = verifyTailscale
const (
// winMSIEnv is the environment variable that, if set, is the MSI file for
// the update command to install. It's passed like this so we can stop the
// tailscale.exe process from running before the msiexec process runs and
// tries to overwrite ourselves.
winMSIEnv = "TS_UPDATE_WIN_MSI"
// winExePathEnv is the environment variable that is set along with
// winMSIEnv and carries the full path of the calling tailscale.exe binary.
// It is used to re-launch the GUI process (tailscale-ipn.exe) after
// install is complete.
winExePathEnv = "TS_UPDATE_WIN_EXE_PATH"
)
func makeSelfCopy() (origPathExe, tmpPathExe string, err error) {
selfExe, err := os.Executable()
if err != nil {
return "", "", err
}
f, err := os.Open(selfExe)
if err != nil {
return "", "", err
}
defer f.Close()
f2, err := os.CreateTemp("", "tailscale-updater-*.exe")
if err != nil {
return "", "", err
}
if err := markTempFileWindows(f2.Name()); err != nil {
return "", "", err
}
if _, err := io.Copy(f2, f); err != nil {
f2.Close()
return "", "", err
}
return selfExe, f2.Name(), f2.Close()
}
func markTempFileWindows(name string) error {
@@ -23,6 +67,159 @@ func markTempFileWindows(name string) error {
const certSubjectTailscale = "Tailscale Inc."
func verifyTailscale(path string) error {
func verifyAuthenticode(path string) error {
return authenticode.Verify(path, certSubjectTailscale)
}
func (up *Updater) updateWindows() error {
if msi := os.Getenv(winMSIEnv); msi != "" {
// stdout/stderr from this part of the install could be lost since the
// parent tailscaled is replaced. Create a temp log file to have some
// output to debug with in case update fails.
close, err := up.switchOutputToFile()
if err != nil {
up.Logf("failed to create log file for installation: %v; proceeding with existing outputs", err)
} else {
defer close.Close()
}
up.Logf("installing %v ...", msi)
if err := up.installMSI(msi); err != nil {
up.Logf("MSI install failed: %v", err)
return err
}
up.Logf("success.")
return nil
}
if !winutil.IsCurrentProcessElevated() {
return errors.New(`update must be run as Administrator
you can run the command prompt as Administrator one of these ways:
* right-click cmd.exe, select 'Run as administrator'
* press Windows+x, then press a
* press Windows+r, type in "cmd", then press Ctrl+Shift+Enter`)
}
ver, err := requestedTailscaleVersion(up.Version, up.Track)
if err != nil {
return err
}
arch := runtime.GOARCH
if arch == "386" {
arch = "x86"
}
if !up.confirm(ver) {
return nil
}
tsDir := filepath.Join(os.Getenv("ProgramData"), "Tailscale")
msiDir := filepath.Join(tsDir, "MSICache")
if fi, err := os.Stat(tsDir); err != nil {
return fmt.Errorf("expected %s to exist, got stat error: %w", tsDir, err)
} else if !fi.IsDir() {
return fmt.Errorf("expected %s to be a directory; got %v", tsDir, fi.Mode())
}
if err := os.MkdirAll(msiDir, 0700); err != nil {
return err
}
up.cleanupOldDownloads(filepath.Join(msiDir, "*.msi"))
pkgsPath := fmt.Sprintf("%s/tailscale-setup-%s-%s.msi", up.Track, ver, arch)
msiTarget := filepath.Join(msiDir, path.Base(pkgsPath))
if err := up.downloadURLToFile(pkgsPath, msiTarget); err != nil {
return err
}
up.Logf("verifying MSI authenticode...")
if err := verifyAuthenticode(msiTarget); err != nil {
return fmt.Errorf("authenticode verification of %s failed: %w", msiTarget, err)
}
up.Logf("authenticode verification succeeded")
up.Logf("making tailscale.exe copy to switch to...")
up.cleanupOldDownloads(filepath.Join(os.TempDir(), "tailscale-updater-*.exe"))
selfOrig, selfCopy, err := makeSelfCopy()
if err != nil {
return err
}
defer os.Remove(selfCopy)
up.Logf("running tailscale.exe copy for final install...")
cmd := exec.Command(selfCopy, "update")
cmd.Env = append(os.Environ(), winMSIEnv+"="+msiTarget, winExePathEnv+"="+selfOrig)
cmd.Stdout = up.Stderr
cmd.Stderr = up.Stderr
cmd.Stdin = os.Stdin
if err := cmd.Start(); err != nil {
return err
}
// Once it's started, exit ourselves, so the binary is free
// to be replaced.
os.Exit(0)
panic("unreachable")
}
func (up *Updater) installMSI(msi string) error {
var err error
for tries := 0; tries < 2; tries++ {
cmd := exec.Command("msiexec.exe", "/i", filepath.Base(msi), "/quiet", "/norestart", "/qn")
cmd.Dir = filepath.Dir(msi)
cmd.Stdout = up.Stdout
cmd.Stderr = up.Stderr
cmd.Stdin = os.Stdin
err = cmd.Run()
if err == nil {
break
}
up.Logf("Install attempt failed: %v", err)
uninstallVersion := up.currentVersion
if v := os.Getenv("TS_DEBUG_UNINSTALL_VERSION"); v != "" {
uninstallVersion = v
}
// Assume it's a downgrade, which msiexec won't permit. Uninstall our current version first.
up.Logf("Uninstalling current version %q for downgrade...", uninstallVersion)
cmd = exec.Command("msiexec.exe", "/x", msiUUIDForVersion(uninstallVersion), "/norestart", "/qn")
cmd.Stdout = up.Stdout
cmd.Stderr = up.Stderr
cmd.Stdin = os.Stdin
err = cmd.Run()
up.Logf("msiexec uninstall: %v", err)
}
return err
}
func msiUUIDForVersion(ver string) string {
arch := runtime.GOARCH
if arch == "386" {
arch = "x86"
}
track, err := versionToTrack(ver)
if err != nil {
track = UnstableTrack
}
msiURL := fmt.Sprintf("https://pkgs.tailscale.com/%s/tailscale-setup-%s-%s.msi", track, ver, arch)
return "{" + strings.ToUpper(uuid.NewSHA1(uuid.NameSpaceURL, []byte(msiURL)).String()) + "}"
}
func (up *Updater) switchOutputToFile() (io.Closer, error) {
var logFilePath string
exePath, err := os.Executable()
if err != nil {
logFilePath = filepath.Join(os.TempDir(), "tailscale-updater.log")
} else {
logFilePath = strings.TrimSuffix(exePath, ".exe") + ".log"
}
up.Logf("writing update output to %q", logFilePath)
logFile, err := os.Create(logFilePath)
if err != nil {
return nil, err
}
up.Logf = func(m string, args ...any) {
fmt.Fprintf(logFile, m+"\n", args...)
}
up.Stdout = logFile
up.Stderr = logFile
return logFile, nil
}


@@ -18,12 +18,12 @@ var (
)
func usage() {
fmt.Fprintf(os.Stderr, `
fmt.Fprint(os.Stderr, `
usage: addlicense -file FILE <subcommand args...>
`[1:])
flag.PrintDefaults()
fmt.Fprintf(os.Stderr, `
fmt.Fprint(os.Stderr, `
addlicense adds a Tailscale license to the beginning of file.
It is intended for use with 'go generate', so it also runs a subcommand,


@@ -0,0 +1,131 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
// checkmetrics validates that all metrics in the tailscale client-metrics
// are documented in a given path or URL.
package main
import (
"context"
"flag"
"fmt"
"io"
"log"
"net/http"
"net/http/httptest"
"os"
"strings"
"time"
"tailscale.com/ipn/store/mem"
"tailscale.com/tsnet"
"tailscale.com/tstest/integration/testcontrol"
"tailscale.com/util/httpm"
)
var (
kbPath = flag.String("kb-path", "", "filepath to the client-metrics knowledge base")
kbUrl = flag.String("kb-url", "", "URL to the client-metrics knowledge base page")
)
func main() {
flag.Parse()
if *kbPath == "" && *kbUrl == "" {
log.Fatalf("either -kb-path or -kb-url must be set")
}
var control testcontrol.Server
ts := httptest.NewServer(&control)
defer ts.Close()
td, err := os.MkdirTemp("", "testcontrol")
if err != nil {
log.Fatal(err)
}
defer os.RemoveAll(td)
// tsnet is not used as a Tailscale client here, but as a way to
// boot up Tailscale, have all the metrics registered, and then
// verify that all the metrics are documented.
tsn := &tsnet.Server{
Dir: td,
Store: new(mem.Store),
UserLogf: log.Printf,
Ephemeral: true,
ControlURL: ts.URL,
}
if err := tsn.Start(); err != nil {
log.Fatal(err)
}
defer tsn.Close()
log.Printf("checking that all metrics are documented, looking for: %s", tsn.Sys().UserMetricsRegistry().MetricNames())
if *kbPath != "" {
kb, err := readKB(*kbPath)
if err != nil {
log.Fatalf("reading kb: %v", err)
}
missing := undocumentedMetrics(kb, tsn.Sys().UserMetricsRegistry().MetricNames())
if len(missing) > 0 {
log.Fatalf("found undocumented metrics in %q: %v", *kbPath, missing)
}
}
if *kbUrl != "" {
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
kb, err := getKB(ctx, *kbUrl)
if err != nil {
log.Fatalf("getting kb: %v", err)
}
missing := undocumentedMetrics(kb, tsn.Sys().UserMetricsRegistry().MetricNames())
if len(missing) > 0 {
log.Fatalf("found undocumented metrics in %q: %v", *kbUrl, missing)
}
}
}
func readKB(path string) (string, error) {
b, err := os.ReadFile(path)
if err != nil {
return "", fmt.Errorf("reading file: %w", err)
}
return string(b), nil
}
func getKB(ctx context.Context, url string) (string, error) {
req, err := http.NewRequestWithContext(ctx, httpm.GET, url, nil)
if err != nil {
return "", fmt.Errorf("creating request: %w", err)
}
resp, err := http.DefaultClient.Do(req)
if err != nil {
return "", fmt.Errorf("getting kb page: %w", err)
}
if resp.StatusCode != http.StatusOK {
return "", fmt.Errorf("unexpected status code: %d", resp.StatusCode)
}
b, err := io.ReadAll(resp.Body)
if err != nil {
return "", fmt.Errorf("reading body: %w", err)
}
return string(b), nil
}
func undocumentedMetrics(b string, metrics []string) []string {
var missing []string
for _, metric := range metrics {
if !strings.Contains(b, metric) {
missing = append(missing, metric)
}
}
return missing
}
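In CI this is intended to be invoked with exactly one of the two flags, for example go run ./cmd/checkmetrics -kb-path=./path/to/client-metrics.md, or with -kb-url pointing at the published knowledge-base page (the ./cmd/checkmetrics path is assumed from the package comment); the process exits non-zero if any registered client metric is missing from the supplied text.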


@@ -117,7 +117,7 @@ func installEgressForwardingRule(_ context.Context, dstStr string, tsIPs []netip
if err := nfr.DNATNonTailscaleTraffic("tailscale0", dst); err != nil {
return fmt.Errorf("installing egress proxy rules: %w", err)
}
if err := nfr.AddSNATRuleForDst(local, dst); err != nil {
if err := nfr.EnsureSNATForDst(local, dst); err != nil {
return fmt.Errorf("installing egress proxy rules: %w", err)
}
if err := nfr.ClampMSSToPMTU("tailscale0", dst); err != nil {


@@ -7,7 +7,6 @@ package main
import (
"log"
"net"
"net/http"
"sync"
)
@@ -23,29 +22,29 @@ type healthz struct {
func (h *healthz) ServeHTTP(w http.ResponseWriter, r *http.Request) {
h.Lock()
defer h.Unlock()
if h.hasAddrs {
w.Write([]byte("ok"))
} else {
http.Error(w, "node currently has no tailscale IPs", http.StatusInternalServerError)
http.Error(w, "node currently has no tailscale IPs", http.StatusServiceUnavailable)
}
}
// runHealthz runs a simple HTTP health endpoint on /healthz, listening on the
// provided address. A containerized tailscale instance is considered healthy if
func (h *healthz) update(healthy bool) {
h.Lock()
defer h.Unlock()
if h.hasAddrs != healthy {
log.Println("Setting healthy", healthy)
}
h.hasAddrs = healthy
}
// healthHandlers registers a simple health handler at /healthz.
// A containerized tailscale instance is considered healthy if
// it has at least one tailnet IP address.
func runHealthz(addr string, h *healthz) {
lis, err := net.Listen("tcp", addr)
if err != nil {
log.Fatalf("error listening on the provided health endpoint address %q: %v", addr, err)
}
mux := http.NewServeMux()
mux.Handle("/healthz", h)
log.Printf("Running healthcheck endpoint at %s/healthz", addr)
hs := &http.Server{Handler: mux}
go func() {
if err := hs.Serve(lis); err != nil {
log.Fatalf("failed running health endpoint: %v", err)
}
}()
func healthHandlers(mux *http.ServeMux) *healthz {
h := &healthz{}
mux.Handle("GET /healthz", h)
return h
}
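A hedged sketch of how the refactored pieces fit together on the caller's side (the listen address and the addrs variable are illustrative):

mux := http.NewServeMux()
h := healthHandlers(mux)
go func() {
	log.Fatal(http.ListenAndServe("[::]:9002", mux)) // address is a placeholder
}()
// elsewhere, whenever the node's set of tailnet IPs changes:
h.update(len(addrs) > 0)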


@@ -9,30 +9,56 @@ import (
"context"
"encoding/json"
"fmt"
"log"
"net/http"
"net/netip"
"os"
"tailscale.com/kube/kubeapi"
"tailscale.com/kube/kubeclient"
"tailscale.com/kube/kubetypes"
"tailscale.com/tailcfg"
)
// storeDeviceID writes deviceID to 'device_id' data field of the named
// Kubernetes Secret.
func storeDeviceID(ctx context.Context, secretName string, deviceID tailcfg.StableNodeID) error {
s := &kubeapi.Secret{
Data: map[string][]byte{
"device_id": []byte(deviceID),
},
}
return kc.StrategicMergePatchSecret(ctx, secretName, s, "tailscale-container")
// kubeClient is a wrapper around Tailscale's internal kube client that knows how to talk to the kube API server. We use
// this rather than any of the upstream Kubernetes client libraries to avoid extra imports.
type kubeClient struct {
kubeclient.Client
stateSecret string
canPatch bool // whether the client has permissions to patch Kubernetes Secrets
}
// storeDeviceEndpoints writes device's tailnet IPs and MagicDNS name to fields
// 'device_ips', 'device_fqdn' of the named Kubernetes Secret.
func storeDeviceEndpoints(ctx context.Context, secretName string, fqdn string, addresses []netip.Prefix) error {
func newKubeClient(root string, stateSecret string) (*kubeClient, error) {
if root != "/" {
// If we are running in a test, we need to set the root path to the fake
// service account directory.
kubeclient.SetRootPathForTesting(root)
}
var err error
kc, err := kubeclient.New("tailscale-container")
if err != nil {
return nil, fmt.Errorf("Error creating kube client: %w", err)
}
if (root != "/") || os.Getenv("TS_KUBERNETES_READ_API_SERVER_ADDRESS_FROM_ENV") == "true" {
// Derive the API server address from the environment variables
// Used to set http server in tests, or optionally enabled by flag
kc.SetURL(fmt.Sprintf("https://%s:%s", os.Getenv("KUBERNETES_SERVICE_HOST"), os.Getenv("KUBERNETES_SERVICE_PORT_HTTPS")))
}
return &kubeClient{Client: kc, stateSecret: stateSecret}, nil
}
// storeDeviceID writes deviceID to 'device_id' data field of the client's state Secret.
func (kc *kubeClient) storeDeviceID(ctx context.Context, deviceID tailcfg.StableNodeID) error {
s := &kubeapi.Secret{
Data: map[string][]byte{
kubetypes.KeyDeviceID: []byte(deviceID),
},
}
return kc.StrategicMergePatchSecret(ctx, kc.stateSecret, s, "tailscale-container")
}
// storeDeviceEndpoints writes device's tailnet IPs and MagicDNS name to fields 'device_ips', 'device_fqdn' of client's
// state Secret.
func (kc *kubeClient) storeDeviceEndpoints(ctx context.Context, fqdn string, addresses []netip.Prefix) error {
var ips []string
for _, addr := range addresses {
ips = append(ips, addr.Addr().String())
@@ -44,16 +70,28 @@ func storeDeviceEndpoints(ctx context.Context, secretName string, fqdn string, a
s := &kubeapi.Secret{
Data: map[string][]byte{
"device_fqdn": []byte(fqdn),
"device_ips": deviceIPs,
kubetypes.KeyDeviceFQDN: []byte(fqdn),
kubetypes.KeyDeviceIPs: deviceIPs,
},
}
return kc.StrategicMergePatchSecret(ctx, secretName, s, "tailscale-container")
return kc.StrategicMergePatchSecret(ctx, kc.stateSecret, s, "tailscale-container")
}
// storeHTTPSEndpoint writes an HTTPS endpoint exposed by this device via 'tailscale serve' to the client's state
// Secret. In practice this will be the same value that gets written to 'device_fqdn', but this should only be called
// when the serve config has been successfully set up.
func (kc *kubeClient) storeHTTPSEndpoint(ctx context.Context, ep string) error {
s := &kubeapi.Secret{
Data: map[string][]byte{
kubetypes.KeyHTTPSEndpoint: []byte(ep),
},
}
return kc.StrategicMergePatchSecret(ctx, kc.stateSecret, s, "tailscale-container")
}
// deleteAuthKey deletes the 'authkey' field of the given kube
// secret. No-op if there is no authkey in the secret.
func deleteAuthKey(ctx context.Context, secretName string) error {
func (kc *kubeClient) deleteAuthKey(ctx context.Context) error {
// m is a JSON Patch data structure, see https://jsonpatch.com/ or RFC 6902.
m := []kubeclient.JSONPatch{
{
@@ -61,7 +99,7 @@ func deleteAuthKey(ctx context.Context, secretName string) error {
Path: "/data/authkey",
},
}
if err := kc.JSONPatchSecret(ctx, secretName, m); err != nil {
if err := kc.JSONPatchResource(ctx, kc.stateSecret, kubeclient.TypeSecrets, m); err != nil {
if s, ok := err.(*kubeapi.Status); ok && s.Code == http.StatusUnprocessableEntity {
// This is kubernetes-ese for "the field you asked to
// delete already doesn't exist", aka no-op.
@@ -72,22 +110,19 @@ func deleteAuthKey(ctx context.Context, secretName string) error {
return nil
}
var kc kubeclient.Client
func initKubeClient(root string) {
if root != "/" {
// If we are running in a test, we need to set the root path to the fake
// service account directory.
kubeclient.SetRootPathForTesting(root)
// storeCapVerUID stores the current capability version of tailscale and, if provided, the UID of the Pod in the tailscale
// state Secret.
// These two fields are used by the Kubernetes Operator to observe the current capability version of tailscaled running in this container.
func (kc *kubeClient) storeCapVerUID(ctx context.Context, podUID string) error {
capVerS := fmt.Sprintf("%d", tailcfg.CurrentCapabilityVersion)
d := map[string][]byte{
kubetypes.KeyCapVer: []byte(capVerS),
}
var err error
kc, err = kubeclient.New()
if err != nil {
log.Fatalf("Error creating kube client: %v", err)
if podUID != "" {
d[kubetypes.KeyPodUID] = []byte(podUID)
}
if (root != "/") || os.Getenv("TS_KUBERNETES_READ_API_SERVER_ADDRESS_FROM_ENV") == "true" {
// Derive the API server address from the environment variables
// Used to set http server in tests, or optionally enabled by flag
kc.SetURL(fmt.Sprintf("https://%s:%s", os.Getenv("KUBERNETES_SERVICE_HOST"), os.Getenv("KUBERNETES_SERVICE_PORT_HTTPS")))
s := &kubeapi.Secret{
Data: d,
}
return kc.StrategicMergePatchSecret(ctx, kc.stateSecret, s, "tailscale-container")
}
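
Taken together, these helpers give containerboot a small typed API over the state Secret. A rough usage sketch, assuming it sits next to this file in package main and that deviceID and podUID are supplied by the caller (the function name is illustrative; the "tailscale" Secret name matches the TS_KUBE_SECRET default):

func storeStartupInfoSketch(ctx context.Context, deviceID tailcfg.StableNodeID, podUID string) error {
	// "/" means we are not running in a test, so the real service account root is used.
	kc, err := newKubeClient("/", "tailscale")
	if err != nil {
		return err
	}
	if err := kc.storeDeviceID(ctx, deviceID); err != nil {
		return err
	}
	// Record the capability version (and Pod UID, if provided) for the operator to observe.
	return kc.storeCapVerUID(ctx, podUID)
}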

View File

@@ -21,7 +21,7 @@ func TestSetupKube(t *testing.T) {
cfg *settings
wantErr bool
wantCfg *settings
kc kubeclient.Client
kc *kubeClient
}{
{
name: "TS_AUTHKEY set, state Secret exists",
@@ -29,14 +29,14 @@ func TestSetupKube(t *testing.T) {
AuthKey: "foo",
KubeSecret: "foo",
},
kc: &kubeclient.FakeClient{
kc: &kubeClient{stateSecret: "foo", Client: &kubeclient.FakeClient{
CheckSecretPermissionsImpl: func(context.Context, string) (bool, bool, error) {
return false, false, nil
},
GetSecretImpl: func(context.Context, string) (*kubeapi.Secret, error) {
return nil, nil
},
},
}},
wantCfg: &settings{
AuthKey: "foo",
KubeSecret: "foo",
@@ -48,14 +48,14 @@ func TestSetupKube(t *testing.T) {
AuthKey: "foo",
KubeSecret: "foo",
},
kc: &kubeclient.FakeClient{
kc: &kubeClient{stateSecret: "foo", Client: &kubeclient.FakeClient{
CheckSecretPermissionsImpl: func(context.Context, string) (bool, bool, error) {
return false, true, nil
},
GetSecretImpl: func(context.Context, string) (*kubeapi.Secret, error) {
return nil, &kubeapi.Status{Code: 404}
},
},
}},
wantCfg: &settings{
AuthKey: "foo",
KubeSecret: "foo",
@@ -67,14 +67,14 @@ func TestSetupKube(t *testing.T) {
AuthKey: "foo",
KubeSecret: "foo",
},
kc: &kubeclient.FakeClient{
kc: &kubeClient{stateSecret: "foo", Client: &kubeclient.FakeClient{
CheckSecretPermissionsImpl: func(context.Context, string) (bool, bool, error) {
return false, false, nil
},
GetSecretImpl: func(context.Context, string) (*kubeapi.Secret, error) {
return nil, &kubeapi.Status{Code: 404}
},
},
}},
wantCfg: &settings{
AuthKey: "foo",
KubeSecret: "foo",
@@ -87,14 +87,14 @@ func TestSetupKube(t *testing.T) {
AuthKey: "foo",
KubeSecret: "foo",
},
kc: &kubeclient.FakeClient{
kc: &kubeClient{stateSecret: "foo", Client: &kubeclient.FakeClient{
CheckSecretPermissionsImpl: func(context.Context, string) (bool, bool, error) {
return false, false, nil
},
GetSecretImpl: func(context.Context, string) (*kubeapi.Secret, error) {
return nil, &kubeapi.Status{Code: 403}
},
},
}},
wantCfg: &settings{
AuthKey: "foo",
KubeSecret: "foo",
@@ -111,11 +111,11 @@ func TestSetupKube(t *testing.T) {
AuthKey: "foo",
KubeSecret: "foo",
},
kc: &kubeclient.FakeClient{
kc: &kubeClient{stateSecret: "foo", Client: &kubeclient.FakeClient{
CheckSecretPermissionsImpl: func(context.Context, string) (bool, bool, error) {
return false, false, errors.New("broken")
},
},
}},
wantErr: true,
},
{
@@ -127,14 +127,14 @@ func TestSetupKube(t *testing.T) {
wantCfg: &settings{
KubeSecret: "foo",
},
kc: &kubeclient.FakeClient{
kc: &kubeClient{stateSecret: "foo", Client: &kubeclient.FakeClient{
CheckSecretPermissionsImpl: func(context.Context, string) (bool, bool, error) {
return false, true, nil
},
GetSecretImpl: func(context.Context, string) (*kubeapi.Secret, error) {
return nil, &kubeapi.Status{Code: 404}
},
},
}},
},
{
// Interactive login using URL in Pod logs
@@ -145,28 +145,28 @@ func TestSetupKube(t *testing.T) {
wantCfg: &settings{
KubeSecret: "foo",
},
kc: &kubeclient.FakeClient{
kc: &kubeClient{stateSecret: "foo", Client: &kubeclient.FakeClient{
CheckSecretPermissionsImpl: func(context.Context, string) (bool, bool, error) {
return false, false, nil
},
GetSecretImpl: func(context.Context, string) (*kubeapi.Secret, error) {
return &kubeapi.Secret{}, nil
},
},
}},
},
{
name: "TS_AUTHKEY not set, state Secret contains auth key, we do not have RBAC to patch it",
cfg: &settings{
KubeSecret: "foo",
},
kc: &kubeclient.FakeClient{
kc: &kubeClient{stateSecret: "foo", Client: &kubeclient.FakeClient{
CheckSecretPermissionsImpl: func(context.Context, string) (bool, bool, error) {
return false, false, nil
},
GetSecretImpl: func(context.Context, string) (*kubeapi.Secret, error) {
return &kubeapi.Secret{Data: map[string][]byte{"authkey": []byte("foo")}}, nil
},
},
}},
wantCfg: &settings{
KubeSecret: "foo",
},
@@ -177,14 +177,14 @@ func TestSetupKube(t *testing.T) {
cfg: &settings{
KubeSecret: "foo",
},
kc: &kubeclient.FakeClient{
kc: &kubeClient{stateSecret: "foo", Client: &kubeclient.FakeClient{
CheckSecretPermissionsImpl: func(context.Context, string) (bool, bool, error) {
return true, false, nil
},
GetSecretImpl: func(context.Context, string) (*kubeapi.Secret, error) {
return &kubeapi.Secret{Data: map[string][]byte{"authkey": []byte("foo")}}, nil
},
},
}},
wantCfg: &settings{
KubeSecret: "foo",
AuthKey: "foo",
@@ -194,9 +194,9 @@ func TestSetupKube(t *testing.T) {
}
for _, tt := range tests {
kc = tt.kc
kc := tt.kc
t.Run(tt.name, func(t *testing.T) {
if err := tt.cfg.setupKube(context.Background()); (err != nil) != tt.wantErr {
if err := tt.cfg.setupKube(context.Background(), kc); (err != nil) != tt.wantErr {
t.Errorf("settings.setupKube() error = %v, wantErr %v", err, tt.wantErr)
}
if diff := cmp.Diff(*tt.cfg, *tt.wantCfg); diff != "" {

View File

@@ -52,11 +52,17 @@
// ${TS_CERT_DOMAIN}, it will be replaced with the value of the available FQDN.
// It cannot be used in conjunction with TS_DEST_IP. The file is watched for changes,
// and will be re-applied when it changes.
// - TS_HEALTHCHECK_ADDR_PORT: if specified, an HTTP health endpoint will be
// served at /healthz at the provided address, which should be in form [<address>]:<port>.
// If not set, no health check will be run. If set to :<port>, addr will default to 0.0.0.0
// The health endpoint will return 200 OK if this node has at least one tailnet IP address,
// otherwise returns 503.
// - TS_HEALTHCHECK_ADDR_PORT: deprecated, use TS_ENABLE_HEALTH_CHECK instead and optionally
// set TS_LOCAL_ADDR_PORT. Will be removed in 1.82.0.
// - TS_LOCAL_ADDR_PORT: the address and port to serve local metrics and health
// check endpoints if enabled via TS_ENABLE_METRICS and/or TS_ENABLE_HEALTH_CHECK.
// Defaults to [::]:9002, serving on all available interfaces.
// - TS_ENABLE_METRICS: if true, a metrics endpoint will be served at /metrics on
// the address specified by TS_LOCAL_ADDR_PORT. See https://tailscale.com/kb/1482/client-metrics
// for more information on the metrics exposed.
// - TS_ENABLE_HEALTH_CHECK: if true, a health check endpoint will be served at /healthz on
// the address specified by TS_LOCAL_ADDR_PORT. The health endpoint will return 200
// OK if this node has at least one tailnet IP address, otherwise returns 503.
// NB: the health criteria might change in the future.
// - TS_EXPERIMENTAL_VERSIONED_CONFIG_DIR: if specified, a path to a
// directory that contains the tailscaled config file. The config file needs to be
@@ -99,10 +105,10 @@ import (
"log"
"math"
"net"
"net/http"
"net/netip"
"os"
"os/signal"
"path"
"path/filepath"
"slices"
"strings"
@@ -115,6 +121,7 @@ import (
"tailscale.com/client/tailscale"
"tailscale.com/ipn"
kubeutils "tailscale.com/k8s-operator"
"tailscale.com/kube/kubetypes"
"tailscale.com/tailcfg"
"tailscale.com/types/logger"
"tailscale.com/types/ptr"
@@ -132,35 +139,9 @@ func newNetfilterRunner(logf logger.Logf) (linuxfw.NetfilterRunner, error) {
func main() {
log.SetPrefix("boot: ")
tailscale.I_Acknowledge_This_API_Is_Unstable = true
cfg := &settings{
AuthKey: defaultEnvs([]string{"TS_AUTHKEY", "TS_AUTH_KEY"}, ""),
Hostname: defaultEnv("TS_HOSTNAME", ""),
Routes: defaultEnvStringPointer("TS_ROUTES"),
ServeConfigPath: defaultEnv("TS_SERVE_CONFIG", ""),
ProxyTargetIP: defaultEnv("TS_DEST_IP", ""),
ProxyTargetDNSName: defaultEnv("TS_EXPERIMENTAL_DEST_DNS_NAME", ""),
TailnetTargetIP: defaultEnv("TS_TAILNET_TARGET_IP", ""),
TailnetTargetFQDN: defaultEnv("TS_TAILNET_TARGET_FQDN", ""),
DaemonExtraArgs: defaultEnv("TS_TAILSCALED_EXTRA_ARGS", ""),
ExtraArgs: defaultEnv("TS_EXTRA_ARGS", ""),
InKubernetes: os.Getenv("KUBERNETES_SERVICE_HOST") != "",
UserspaceMode: defaultBool("TS_USERSPACE", true),
StateDir: defaultEnv("TS_STATE_DIR", ""),
AcceptDNS: defaultEnvBoolPointer("TS_ACCEPT_DNS"),
KubeSecret: defaultEnv("TS_KUBE_SECRET", "tailscale"),
SOCKSProxyAddr: defaultEnv("TS_SOCKS5_SERVER", ""),
HTTPProxyAddr: defaultEnv("TS_OUTBOUND_HTTP_PROXY_LISTEN", ""),
Socket: defaultEnv("TS_SOCKET", "/tmp/tailscaled.sock"),
AuthOnce: defaultBool("TS_AUTH_ONCE", false),
Root: defaultEnv("TS_TEST_ONLY_ROOT", "/"),
TailscaledConfigFilePath: tailscaledConfigFilePath(),
AllowProxyingClusterTrafficViaIngress: defaultBool("EXPERIMENTAL_ALLOW_PROXYING_CLUSTER_TRAFFIC_VIA_INGRESS", false),
PodIP: defaultEnv("POD_IP", ""),
EnableForwardingOptimizations: defaultBool("TS_EXPERIMENTAL_ENABLE_FORWARDING_OPTIMIZATIONS", false),
HealthCheckAddrPort: defaultEnv("TS_HEALTHCHECK_ADDR_PORT", ""),
}
if err := cfg.validate(); err != nil {
cfg, err := configFromEnv()
if err != nil {
log.Fatalf("invalid configuration: %v", err)
}
@@ -187,9 +168,13 @@ func main() {
bootCtx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
defer cancel()
var kc *kubeClient
if cfg.InKubernetes {
initKubeClient(cfg.Root)
if err := cfg.setupKube(bootCtx); err != nil {
kc, err = newKubeClient(cfg.Root, cfg.KubeSecret)
if err != nil {
log.Fatalf("error initializing kube client: %v", err)
}
if err := cfg.setupKube(bootCtx, kc); err != nil {
log.Fatalf("error setting up for running on Kubernetes: %v", err)
}
}
@@ -205,6 +190,34 @@ func main() {
}
defer killTailscaled()
var healthCheck *healthz
if cfg.HealthCheckAddrPort != "" {
mux := http.NewServeMux()
log.Printf("Running healthcheck endpoint at %s/healthz", cfg.HealthCheckAddrPort)
healthCheck = healthHandlers(mux)
close := runHTTPServer(mux, cfg.HealthCheckAddrPort)
defer close()
}
if cfg.localMetricsEnabled() || cfg.localHealthEnabled() {
mux := http.NewServeMux()
if cfg.localMetricsEnabled() {
log.Printf("Running metrics endpoint at %s/metrics", cfg.LocalAddrPort)
metricsHandlers(mux, client, cfg.DebugAddrPort)
}
if cfg.localHealthEnabled() {
log.Printf("Running healthcheck endpoint at %s/healthz", cfg.LocalAddrPort)
healthCheck = healthHandlers(mux)
}
close := runHTTPServer(mux, cfg.LocalAddrPort)
defer close()
}
if cfg.EnableForwardingOptimizations {
if err := client.SetUDPGROForwarding(bootCtx); err != nil {
log.Printf("[unexpected] error enabling UDP GRO forwarding: %v", err)
@@ -275,10 +288,8 @@ authLoop:
switch *n.State {
case ipn.NeedsLogin:
if isOneStepConfig(cfg) {
// This could happen if this is the
// first time tailscaled was run for
// this device and the auth key was not
// passed via the configfile.
// This could happen if this is the first time tailscaled was run for this
// device and the auth key was not passed via the configfile.
log.Fatalf("invalid state: tailscaled daemon started with a config file, but tailscale is not logged in: ensure you pass a valid auth key in the config file.")
}
if err := authTailscale(); err != nil {
@@ -313,12 +324,18 @@ authLoop:
}
}
// Remove any serve config and advertised HTTPS endpoint that may have been set by a previous run of
// containerboot, but only if we're providing a new one.
if cfg.ServeConfigPath != "" {
// Remove any serve config that may have been set by a previous run of
// containerboot, but only if we're providing a new one.
log.Printf("serve proxy: unsetting previous config")
if err := client.SetServeConfig(ctx, new(ipn.ServeConfig)); err != nil {
log.Fatalf("failed to unset serve config: %v", err)
}
if hasKubeStateStore(cfg) {
if err := kc.storeHTTPSEndpoint(ctx, ""); err != nil {
log.Fatalf("failed to update HTTPS endpoint in tailscale state: %v", err)
}
}
}
if hasKubeStateStore(cfg) && isTwoStepConfigAuthOnce(cfg) {
@@ -326,11 +343,17 @@ authLoop:
// authkey is no longer needed. We don't strictly need to
// wipe it, but it's good hygiene.
log.Printf("Deleting authkey from kube secret")
if err := deleteAuthKey(ctx, cfg.KubeSecret); err != nil {
if err := kc.deleteAuthKey(ctx); err != nil {
log.Fatalf("deleting authkey from kube secret: %v", err)
}
}
if hasKubeStateStore(cfg) {
if err := kc.storeCapVerUID(ctx, cfg.PodUID); err != nil {
log.Fatalf("storing capability version and UID: %v", err)
}
}
w, err = client.WatchIPNBus(ctx, ipn.NotifyInitialNetMap|ipn.NotifyInitialState)
if err != nil {
log.Fatalf("rewatching tailscaled for updates after auth: %v", err)
@@ -350,12 +373,9 @@ authLoop:
certDomain = new(atomic.Pointer[string])
certDomainChanged = make(chan bool, 1)
h = &healthz{} // http server for the healthz endpoint
healthzRunner = sync.OnceFunc(func() { runHealthz(cfg.HealthCheckAddrPort, h) })
triggerWatchServeConfigChanges sync.Once
)
if cfg.ServeConfigPath != "" {
go watchServeConfigChanges(ctx, cfg.ServeConfigPath, certDomainChanged, certDomain, client)
}
var nfr linuxfw.NetfilterRunner
if isL3Proxy(cfg) {
nfr, err = newNetfilterRunner(log.Printf)
@@ -376,6 +396,9 @@ authLoop:
}
})
)
// egressSvcsErrorChan will get an error sent to it if this containerboot instance is configured to expose 1+
// egress services in HA mode and errored.
var egressSvcsErrorChan = make(chan error)
defer t.Stop()
// resetTimer resets timer for when to next attempt to resolve the DNS
// name for the proxy configured with TS_EXPERIMENTAL_DEST_DNS_NAME. The
@@ -401,6 +424,7 @@ authLoop:
failedResolveAttempts++
}
var egressSvcsNotify chan ipn.Notify
notifyChan := make(chan ipn.Notify)
errChan := make(chan error)
go func() {
@@ -452,7 +476,7 @@ runLoop:
// fails.
deviceID := n.NetMap.SelfNode.StableID()
if hasKubeStateStore(cfg) && deephash.Update(&currentDeviceID, &deviceID) {
if err := storeDeviceID(ctx, cfg.KubeSecret, n.NetMap.SelfNode.StableID()); err != nil {
if err := kc.storeDeviceID(ctx, n.NetMap.SelfNode.StableID()); err != nil {
log.Fatalf("storing device ID in Kubernetes Secret: %v", err)
}
}
@@ -478,7 +502,11 @@ runLoop:
egressAddrs = node.Addresses().AsSlice()
newCurentEgressIPs = deephash.Hash(&egressAddrs)
egressIPsHaveChanged = newCurentEgressIPs != currentEgressIPs
if egressIPsHaveChanged && len(egressAddrs) != 0 {
// The firewall rules get (re-)installed:
// - on startup
// - when the tailnet IPs of the tailnet target have changed
// - when the tailnet IPs of this node have changed
if (egressIPsHaveChanged || ipsHaveChanged) && len(egressAddrs) != 0 {
var rulesInstalled bool
for _, egressAddr := range egressAddrs {
ea := egressAddr.Addr()
@@ -521,8 +549,11 @@ runLoop:
resetTimer(false)
backendAddrs = newBackendAddrs
}
if cfg.ServeConfigPath != "" && len(n.NetMap.DNS.CertDomains) != 0 {
cd := n.NetMap.DNS.CertDomains[0]
if cfg.ServeConfigPath != "" {
cd := certDomainFromNetmap(n.NetMap)
if cd == "" {
cd = kubetypes.ValueNoHTTPS
}
prev := certDomain.Swap(ptr.To(cd))
if prev == nil || *prev != cd {
select {
@@ -564,42 +595,65 @@ runLoop:
// TODO (irbekrm): instead of using the IP and FQDN, have some other mechanism for the proxy signal that it is 'Ready'.
deviceEndpoints := []any{n.NetMap.SelfNode.Name(), n.NetMap.SelfNode.Addresses()}
if hasKubeStateStore(cfg) && deephash.Update(&currentDeviceEndpoints, &deviceEndpoints) {
if err := storeDeviceEndpoints(ctx, cfg.KubeSecret, n.NetMap.SelfNode.Name(), n.NetMap.SelfNode.Addresses().AsSlice()); err != nil {
if err := kc.storeDeviceEndpoints(ctx, n.NetMap.SelfNode.Name(), n.NetMap.SelfNode.Addresses().AsSlice()); err != nil {
log.Fatalf("storing device IPs and FQDN in Kubernetes Secret: %v", err)
}
}
if cfg.HealthCheckAddrPort != "" {
h.Lock()
h.hasAddrs = len(addrs) != 0
h.Unlock()
healthzRunner()
if healthCheck != nil {
healthCheck.update(len(addrs) != 0)
}
if cfg.ServeConfigPath != "" {
triggerWatchServeConfigChanges.Do(func() {
go watchServeConfigChanges(ctx, cfg.ServeConfigPath, certDomainChanged, certDomain, client, kc)
})
}
if egressSvcsNotify != nil {
egressSvcsNotify <- n
}
}
if !startupTasksDone {
// For containerboot instances that act as TCP
// proxies (proxying traffic to an endpoint
// passed via one of the env vars that
// containerbot reads) and store state in a
// Kubernetes Secret, we consider startup tasks
// done at the point when device info has been
// successfully stored to state Secret.
// For all other containerboot instances, if we
// just get to this point the startup tasks can
// be considered done.
// For containerboot instances that act as TCP proxies (proxying traffic to an endpoint
// passed via one of the env vars that containerboot reads) and store state in a
// Kubernetes Secret, we consider startup tasks done at the point when device info has
// been successfully stored to state Secret. For all other containerboot instances, if
// we just get to this point the startup tasks can be considered done.
if !isL3Proxy(cfg) || !hasKubeStateStore(cfg) || (currentDeviceEndpoints != deephash.Sum{} && currentDeviceID != deephash.Sum{}) {
// This log message is used in tests to detect when all
// post-auth configuration is done.
log.Println("Startup complete, waiting for shutdown signal")
startupTasksDone = true
// Wait on tailscaled process. It won't
// be cleaned up by default when the
// container exits as it is not PID1.
// TODO (irbekrm): perhaps we can
// replace the reaper by a running
// cmd.Wait in a goroutine immediately
// after starting tailscaled?
// Configure egress proxy. Egress proxy will set up firewall rules to proxy
// traffic to tailnet targets configured in the provided configuration file. It
// will then continuously monitor the config file and netmap updates and
// reconfigure the firewall rules as needed. If any of its operations fail, it
// will crash this node.
if cfg.EgressSvcsCfgPath != "" {
log.Printf("configuring egress proxy using configuration file at %s", cfg.EgressSvcsCfgPath)
egressSvcsNotify = make(chan ipn.Notify)
ep := egressProxy{
cfgPath: cfg.EgressSvcsCfgPath,
nfr: nfr,
kc: kc,
stateSecret: cfg.KubeSecret,
netmapChan: egressSvcsNotify,
podIPv4: cfg.PodIPv4,
tailnetAddrs: addrs,
}
go func() {
if err := ep.run(ctx, n); err != nil {
egressSvcsErrorChan <- err
}
}()
}
// Wait on tailscaled process. It won't be cleaned up by default when the
// container exits as it is not PID1. TODO (irbekrm): perhaps we can replace the
// reaper by a running cmd.Wait in a goroutine immediately after starting
// tailscaled?
reaper := func() {
defer wg.Done()
for {
@@ -637,6 +691,8 @@ runLoop:
}
backendAddrs = newBackendAddrs
resetTimer(false)
case e := <-egressSvcsErrorChan:
log.Fatalf("egress proxy failed: %v", e)
}
}
wg.Wait()
@@ -730,7 +786,6 @@ func tailscaledConfigFilePath() string {
}
cv, err := kubeutils.CapVerFromFileName(e.Name())
if err != nil {
log.Printf("skipping file %q in tailscaled config directory %q: %v", e.Name(), dir, err)
continue
}
if cv > maxCompatVer && cv <= tailcfg.CurrentCapabilityVersion {
@@ -738,8 +793,28 @@ func tailscaledConfigFilePath() string {
}
}
if maxCompatVer == -1 {
log.Fatalf("no tailscaled config file found in %q for current capability version %q", dir, tailcfg.CurrentCapabilityVersion)
log.Fatalf("no tailscaled config file found in %q for current capability version %d", dir, tailcfg.CurrentCapabilityVersion)
}
filePath := filepath.Join(dir, kubeutils.TailscaledConfigFileName(maxCompatVer))
log.Printf("Using tailscaled config file %q to match current capability version %d", filePath, tailcfg.CurrentCapabilityVersion)
return filePath
}
func runHTTPServer(mux *http.ServeMux, addr string) (close func() error) {
ln, err := net.Listen("tcp", addr)
if err != nil {
log.Fatalf("failed to listen on addr %q: %v", addr, err)
}
srv := &http.Server{Handler: mux}
go func() {
if err := srv.Serve(ln); err != nil {
log.Fatalf("failed running server: %v", err)
}
}()
return func() error {
err := srv.Shutdown(context.Background())
return errors.Join(err, ln.Close())
}
log.Printf("Using tailscaled config file %q for capability version %q", maxCompatVer, tailcfg.CurrentCapabilityVersion)
return path.Join(dir, kubeutils.TailscaledConfigFileNameForCap(maxCompatVer))
}
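
To illustrate the endpoints wired up above, a standalone probe of the default TS_LOCAL_ADDR_PORT address documented earlier in this file (the loopback address and the expected status codes are assumptions based on that documentation):

package main

import (
	"fmt"
	"net/http"
)

func main() {
	for _, path := range []string{"/healthz", "/metrics"} {
		resp, err := http.Get("http://127.0.0.1:9002" + path)
		if err != nil {
			fmt.Println(path, "error:", err)
			continue
		}
		// /healthz should be 200 once the node has a tailnet IP and 503 before that;
		// /metrics should be 200 only when TS_ENABLE_METRICS=true.
		fmt.Println(path, resp.StatusCode)
		resp.Body.Close()
	}
}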

View File

@@ -31,6 +31,7 @@ import (
"github.com/google/go-cmp/cmp"
"golang.org/x/sys/unix"
"tailscale.com/ipn"
"tailscale.com/kube/egressservices"
"tailscale.com/tailcfg"
"tailscale.com/tstest"
"tailscale.com/types/netmap"
@@ -57,6 +58,16 @@ func TestContainerBoot(t *testing.T) {
if err != nil {
t.Fatalf("error unmarshaling tailscaled config: %v", err)
}
serveConf := ipn.ServeConfig{TCP: map[uint16]*ipn.TCPPortHandler{80: {HTTP: true}}}
serveConfBytes, err := json.Marshal(serveConf)
if err != nil {
t.Fatalf("error unmarshaling serve config: %v", err)
}
egressSvcsCfg := egressservices.Configs{"foo": {TailnetTarget: egressservices.TailnetTarget{FQDN: "foo.tailnetxyx.ts.net"}}}
egressSvcsCfgBytes, err := json.Marshal(egressSvcsCfg)
if err != nil {
t.Fatalf("error unmarshaling egress services config: %v", err)
}
dirs := []string{
"var/lib",
@@ -73,14 +84,16 @@ func TestContainerBoot(t *testing.T) {
}
}
files := map[string][]byte{
"usr/bin/tailscaled": fakeTailscaled,
"usr/bin/tailscale": fakeTailscale,
"usr/bin/iptables": fakeTailscale,
"usr/bin/ip6tables": fakeTailscale,
"dev/net/tun": []byte(""),
"proc/sys/net/ipv4/ip_forward": []byte("0"),
"proc/sys/net/ipv6/conf/all/forwarding": []byte("0"),
"etc/tailscaled/cap-95.hujson": tailscaledConfBytes,
"usr/bin/tailscaled": fakeTailscaled,
"usr/bin/tailscale": fakeTailscale,
"usr/bin/iptables": fakeTailscale,
"usr/bin/ip6tables": fakeTailscale,
"dev/net/tun": []byte(""),
"proc/sys/net/ipv4/ip_forward": []byte("0"),
"proc/sys/net/ipv6/conf/all/forwarding": []byte("0"),
"etc/tailscaled/cap-95.hujson": tailscaledConfBytes,
"etc/tailscaled/serve-config.json": serveConfBytes,
"etc/tailscaled/egress-services-config.json": egressSvcsCfgBytes,
}
resetFiles := func() {
for path, content := range files {
@@ -101,6 +114,26 @@ func TestContainerBoot(t *testing.T) {
argFile := filepath.Join(d, "args")
runningSockPath := filepath.Join(d, "tmp/tailscaled.sock")
var localAddrPort, healthAddrPort int
for _, p := range []*int{&localAddrPort, &healthAddrPort} {
ln, err := net.Listen("tcp", ":0")
if err != nil {
t.Fatalf("Failed to open listener: %v", err)
}
if err := ln.Close(); err != nil {
t.Fatalf("Failed to close listener: %v", err)
}
port := ln.Addr().(*net.TCPAddr).Port
*p = port
}
metricsURL := func(port int) string {
return fmt.Sprintf("http://127.0.0.1:%d/metrics", port)
}
healthURL := func(port int) string {
return fmt.Sprintf("http://127.0.0.1:%d/healthz", port)
}
capver := fmt.Sprintf("%d", tailcfg.CurrentCapabilityVersion)
type phase struct {
// If non-nil, send this IPN bus notification (and remember it as the
@@ -119,6 +152,8 @@ func TestContainerBoot(t *testing.T) {
// WantFatalLog is the fatal log message we expect from containerboot.
// If set for a phase, the test will finish on that phase.
WantFatalLog string
EndpointStatuses map[string]int
}
runningNotify := &ipn.Notify{
State: ptr.To(ipn.Running),
@@ -147,6 +182,11 @@ func TestContainerBoot(t *testing.T) {
"/usr/bin/tailscaled --socket=/tmp/tailscaled.sock --state=mem: --statedir=/tmp --tun=userspace-networking",
"/usr/bin/tailscale --socket=/tmp/tailscaled.sock up --accept-dns=false",
},
// No metrics or health by default.
EndpointStatuses: map[string]int{
metricsURL(9002): -1,
healthURL(9002): -1,
},
},
{
Notify: runningNotify,
@@ -453,10 +493,11 @@ func TestContainerBoot(t *testing.T) {
{
Notify: runningNotify,
WantKubeSecret: map[string]string{
"authkey": "tskey-key",
"device_fqdn": "test-node.test.ts.net",
"device_id": "myID",
"device_ips": `["100.64.0.1"]`,
"authkey": "tskey-key",
"device_fqdn": "test-node.test.ts.net",
"device_id": "myID",
"device_ips": `["100.64.0.1"]`,
"tailscale_capver": capver,
},
},
},
@@ -546,9 +587,10 @@ func TestContainerBoot(t *testing.T) {
"/usr/bin/tailscale --socket=/tmp/tailscaled.sock set --accept-dns=false",
},
WantKubeSecret: map[string]string{
"device_fqdn": "test-node.test.ts.net",
"device_id": "myID",
"device_ips": `["100.64.0.1"]`,
"device_fqdn": "test-node.test.ts.net",
"device_id": "myID",
"device_ips": `["100.64.0.1"]`,
"tailscale_capver": capver,
},
},
},
@@ -575,10 +617,11 @@ func TestContainerBoot(t *testing.T) {
{
Notify: runningNotify,
WantKubeSecret: map[string]string{
"authkey": "tskey-key",
"device_fqdn": "test-node.test.ts.net",
"device_id": "myID",
"device_ips": `["100.64.0.1"]`,
"authkey": "tskey-key",
"device_fqdn": "test-node.test.ts.net",
"device_id": "myID",
"device_ips": `["100.64.0.1"]`,
"tailscale_capver": capver,
},
},
{
@@ -593,10 +636,11 @@ func TestContainerBoot(t *testing.T) {
},
},
WantKubeSecret: map[string]string{
"authkey": "tskey-key",
"device_fqdn": "new-name.test.ts.net",
"device_id": "newID",
"device_ips": `["100.64.0.1"]`,
"authkey": "tskey-key",
"device_fqdn": "new-name.test.ts.net",
"device_id": "newID",
"device_ips": `["100.64.0.1"]`,
"tailscale_capver": capver,
},
},
},
@@ -700,6 +744,199 @@ func TestContainerBoot(t *testing.T) {
},
},
},
{
Name: "metrics_enabled",
Env: map[string]string{
"TS_LOCAL_ADDR_PORT": fmt.Sprintf("[::]:%d", localAddrPort),
"TS_ENABLE_METRICS": "true",
},
Phases: []phase{
{
WantCmds: []string{
"/usr/bin/tailscaled --socket=/tmp/tailscaled.sock --state=mem: --statedir=/tmp --tun=userspace-networking",
"/usr/bin/tailscale --socket=/tmp/tailscaled.sock up --accept-dns=false",
},
EndpointStatuses: map[string]int{
metricsURL(localAddrPort): 200,
healthURL(localAddrPort): -1,
},
}, {
Notify: runningNotify,
},
},
},
{
Name: "health_enabled",
Env: map[string]string{
"TS_LOCAL_ADDR_PORT": fmt.Sprintf("[::]:%d", localAddrPort),
"TS_ENABLE_HEALTH_CHECK": "true",
},
Phases: []phase{
{
WantCmds: []string{
"/usr/bin/tailscaled --socket=/tmp/tailscaled.sock --state=mem: --statedir=/tmp --tun=userspace-networking",
"/usr/bin/tailscale --socket=/tmp/tailscaled.sock up --accept-dns=false",
},
EndpointStatuses: map[string]int{
metricsURL(localAddrPort): -1,
healthURL(localAddrPort): 503, // Doesn't start passing until the next phase.
},
}, {
Notify: runningNotify,
EndpointStatuses: map[string]int{
metricsURL(localAddrPort): -1,
healthURL(localAddrPort): 200,
},
},
},
},
{
Name: "metrics_and_health_on_same_port",
Env: map[string]string{
"TS_LOCAL_ADDR_PORT": fmt.Sprintf("[::]:%d", localAddrPort),
"TS_ENABLE_METRICS": "true",
"TS_ENABLE_HEALTH_CHECK": "true",
},
Phases: []phase{
{
WantCmds: []string{
"/usr/bin/tailscaled --socket=/tmp/tailscaled.sock --state=mem: --statedir=/tmp --tun=userspace-networking",
"/usr/bin/tailscale --socket=/tmp/tailscaled.sock up --accept-dns=false",
},
EndpointStatuses: map[string]int{
metricsURL(localAddrPort): 200,
healthURL(localAddrPort): 503, // Doesn't start passing until the next phase.
},
}, {
Notify: runningNotify,
EndpointStatuses: map[string]int{
metricsURL(localAddrPort): 200,
healthURL(localAddrPort): 200,
},
},
},
},
{
Name: "local_metrics_and_deprecated_health",
Env: map[string]string{
"TS_LOCAL_ADDR_PORT": fmt.Sprintf("[::]:%d", localAddrPort),
"TS_ENABLE_METRICS": "true",
"TS_HEALTHCHECK_ADDR_PORT": fmt.Sprintf("[::]:%d", healthAddrPort),
},
Phases: []phase{
{
WantCmds: []string{
"/usr/bin/tailscaled --socket=/tmp/tailscaled.sock --state=mem: --statedir=/tmp --tun=userspace-networking",
"/usr/bin/tailscale --socket=/tmp/tailscaled.sock up --accept-dns=false",
},
EndpointStatuses: map[string]int{
metricsURL(localAddrPort): 200,
healthURL(healthAddrPort): 503, // Doesn't start passing until the next phase.
},
}, {
Notify: runningNotify,
EndpointStatuses: map[string]int{
metricsURL(localAddrPort): 200,
healthURL(healthAddrPort): 200,
},
},
},
},
{
Name: "serve_config_no_kube",
Env: map[string]string{
"TS_SERVE_CONFIG": filepath.Join(d, "etc/tailscaled/serve-config.json"),
"TS_AUTHKEY": "tskey-key",
},
Phases: []phase{
{
WantCmds: []string{
"/usr/bin/tailscaled --socket=/tmp/tailscaled.sock --state=mem: --statedir=/tmp --tun=userspace-networking",
"/usr/bin/tailscale --socket=/tmp/tailscaled.sock up --accept-dns=false --authkey=tskey-key",
},
},
{
Notify: runningNotify,
},
},
},
{
Name: "serve_config_kube",
Env: map[string]string{
"KUBERNETES_SERVICE_HOST": kube.Host,
"KUBERNETES_SERVICE_PORT_HTTPS": kube.Port,
"TS_SERVE_CONFIG": filepath.Join(d, "etc/tailscaled/serve-config.json"),
},
KubeSecret: map[string]string{
"authkey": "tskey-key",
},
Phases: []phase{
{
WantCmds: []string{
"/usr/bin/tailscaled --socket=/tmp/tailscaled.sock --state=kube:tailscale --statedir=/tmp --tun=userspace-networking",
"/usr/bin/tailscale --socket=/tmp/tailscaled.sock up --accept-dns=false --authkey=tskey-key",
},
WantKubeSecret: map[string]string{
"authkey": "tskey-key",
},
},
{
Notify: runningNotify,
WantKubeSecret: map[string]string{
"authkey": "tskey-key",
"device_fqdn": "test-node.test.ts.net",
"device_id": "myID",
"device_ips": `["100.64.0.1"]`,
"https_endpoint": "no-https",
"tailscale_capver": capver,
},
},
},
},
{
Name: "egress_svcs_config_kube",
Env: map[string]string{
"KUBERNETES_SERVICE_HOST": kube.Host,
"KUBERNETES_SERVICE_PORT_HTTPS": kube.Port,
"TS_EGRESS_SERVICES_CONFIG_PATH": filepath.Join(d, "etc/tailscaled/egress-services-config.json"),
},
KubeSecret: map[string]string{
"authkey": "tskey-key",
},
Phases: []phase{
{
WantCmds: []string{
"/usr/bin/tailscaled --socket=/tmp/tailscaled.sock --state=kube:tailscale --statedir=/tmp --tun=userspace-networking",
"/usr/bin/tailscale --socket=/tmp/tailscaled.sock up --accept-dns=false --authkey=tskey-key",
},
WantKubeSecret: map[string]string{
"authkey": "tskey-key",
},
},
{
Notify: runningNotify,
WantKubeSecret: map[string]string{
"authkey": "tskey-key",
"device_fqdn": "test-node.test.ts.net",
"device_id": "myID",
"device_ips": `["100.64.0.1"]`,
"tailscale_capver": capver,
},
},
},
},
{
Name: "egress_svcs_config_no_kube",
Env: map[string]string{
"TS_EGRESS_SERVICES_CONFIG_PATH": filepath.Join(d, "etc/tailscaled/egress-services-config.json"),
"TS_AUTHKEY": "tskey-key",
},
Phases: []phase{
{
WantFatalLog: "TS_EGRESS_SERVICES_CONFIG_PATH is only supported for Tailscale running on Kubernetes",
},
},
},
}
for _, test := range tests {
@@ -796,7 +1033,26 @@ func TestContainerBoot(t *testing.T) {
return nil
})
if err != nil {
t.Fatal(err)
t.Fatalf("phase %d: %v", i, err)
}
for url, want := range p.EndpointStatuses {
err := tstest.WaitFor(2*time.Second, func() error {
resp, err := http.Get(url)
if err != nil && want != -1 {
return fmt.Errorf("GET %s: %v", url, err)
}
if want > 0 && resp.StatusCode != want {
defer resp.Body.Close()
body, _ := io.ReadAll(resp.Body)
return fmt.Errorf("GET %s, want %d, got %d\n%s", url, want, resp.StatusCode, string(body))
}
return nil
})
if err != nil {
t.Fatalf("phase %d: %v", i, err)
}
}
}
waitLogLine(t, 2*time.Second, cbOut, "Startup complete, waiting for shutdown signal")
@@ -955,6 +1211,12 @@ func (l *localAPI) ServeHTTP(w http.ResponseWriter, r *http.Request) {
if r.Method != "GET" {
panic(fmt.Sprintf("unsupported method %q", r.Method))
}
case "/localapi/v0/usermetrics":
if r.Method != "GET" {
panic(fmt.Sprintf("unsupported method %q", r.Method))
}
w.Write([]byte("fake metrics"))
return
default:
panic(fmt.Sprintf("unsupported path %q", r.URL.Path))
}

View File

@@ -0,0 +1,79 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build linux
package main
import (
"fmt"
"io"
"net/http"
"tailscale.com/client/tailscale"
"tailscale.com/client/tailscale/apitype"
)
// metrics is a simple metrics HTTP server. If enabled, it forwards requests to
// tailscaled's LocalAPI usermetrics endpoint at /localapi/v0/usermetrics.
type metrics struct {
debugEndpoint string
lc *tailscale.LocalClient
}
func proxy(w http.ResponseWriter, r *http.Request, url string, do func(*http.Request) (*http.Response, error)) {
req, err := http.NewRequestWithContext(r.Context(), r.Method, url, r.Body)
if err != nil {
http.Error(w, fmt.Sprintf("failed to construct request: %s", err), http.StatusInternalServerError)
return
}
req.Header = r.Header.Clone()
resp, err := do(req)
if err != nil {
http.Error(w, fmt.Sprintf("failed to proxy request: %s", err), http.StatusInternalServerError)
return
}
defer resp.Body.Close()
for key, val := range resp.Header {
for _, v := range val {
w.Header().Add(key, v)
}
}
w.WriteHeader(resp.StatusCode)
if _, err := io.Copy(w, resp.Body); err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
}
}
func (m *metrics) handleMetrics(w http.ResponseWriter, r *http.Request) {
localAPIURL := "http://" + apitype.LocalAPIHost + "/localapi/v0/usermetrics"
proxy(w, r, localAPIURL, m.lc.DoLocalRequest)
}
func (m *metrics) handleDebug(w http.ResponseWriter, r *http.Request) {
if m.debugEndpoint == "" {
http.Error(w, "debug endpoint not configured", http.StatusNotFound)
return
}
debugURL := "http://" + m.debugEndpoint + r.URL.Path
proxy(w, r, debugURL, http.DefaultClient.Do)
}
// metricsHandlers registers a simple HTTP metrics handler at /metrics, forwarding
// requests to tailscaled's /localapi/v0/usermetrics API.
//
// In 1.78.x and 1.80.x, it also proxies debug paths to tailscaled's debug
// endpoint, if configured, to ease migration for a breaking change: user
// metrics are now served instead of debug metrics on the "metrics" port.
func metricsHandlers(mux *http.ServeMux, lc *tailscale.LocalClient, debugAddrPort string) {
m := &metrics{
lc: lc,
debugEndpoint: debugAddrPort,
}
mux.HandleFunc("GET /metrics", m.handleMetrics)
mux.HandleFunc("/debug/", m.handleDebug) // TODO(tomhjp): Remove for 1.82.0 release.
}
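
A short sketch of the debug-path forwarding above, assuming it lives in the same package as this file (extra imports: fmt, net/http/httptest, strings; the backend and function name are illustrative):

func debugProxySketch() {
	backend := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "debug ok: %s", r.URL.Path)
	}))
	defer backend.Close()

	mux := http.NewServeMux()
	// The LocalClient is only consulted for /metrics; /debug/ is forwarded to debugAddrPort.
	metricsHandlers(mux, &tailscale.LocalClient{}, strings.TrimPrefix(backend.URL, "http://"))

	rec := httptest.NewRecorder()
	mux.ServeHTTP(rec, httptest.NewRequest("GET", "/debug/somepath", nil))
	fmt.Println(rec.Code, rec.Body.String()) // 200 debug ok: /debug/somepath
}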

View File

@@ -19,6 +19,8 @@ import (
"github.com/fsnotify/fsnotify"
"tailscale.com/client/tailscale"
"tailscale.com/ipn"
"tailscale.com/kube/kubetypes"
"tailscale.com/types/netmap"
)
// watchServeConfigChanges watches path for changes, and when it sees one, reads
@@ -26,21 +28,21 @@ import (
// applies it to lc. It exits when ctx is canceled. cdChanged is a channel that
// is written to when the certDomain changes, causing the serve config to be
// re-read and applied.
func watchServeConfigChanges(ctx context.Context, path string, cdChanged <-chan bool, certDomainAtomic *atomic.Pointer[string], lc *tailscale.LocalClient) {
func watchServeConfigChanges(ctx context.Context, path string, cdChanged <-chan bool, certDomainAtomic *atomic.Pointer[string], lc *tailscale.LocalClient, kc *kubeClient) {
if certDomainAtomic == nil {
panic("cd must not be nil")
panic("certDomainAtomic must not be nil")
}
var tickChan <-chan time.Time
var eventChan <-chan fsnotify.Event
if w, err := fsnotify.NewWatcher(); err != nil {
log.Printf("failed to create fsnotify watcher, timer-only mode: %v", err)
log.Printf("serve proxy: failed to create fsnotify watcher, timer-only mode: %v", err)
ticker := time.NewTicker(5 * time.Second)
defer ticker.Stop()
tickChan = ticker.C
} else {
defer w.Close()
if err := w.Add(filepath.Dir(path)); err != nil {
log.Fatalf("failed to add fsnotify watch: %v", err)
log.Fatalf("serve proxy: failed to add fsnotify watch: %v", err)
}
eventChan = w.Events
}
@@ -59,24 +61,62 @@ func watchServeConfigChanges(ctx context.Context, path string, cdChanged <-chan
// k8s handles these mounts. So just re-read the file and apply it
// if it's changed.
}
if certDomain == "" {
continue
}
sc, err := readServeConfig(path, certDomain)
if err != nil {
log.Fatalf("failed to read serve config: %v", err)
log.Fatalf("serve proxy: failed to read serve config: %v", err)
}
if prevServeConfig != nil && reflect.DeepEqual(sc, prevServeConfig) {
continue
}
log.Printf("Applying serve config")
if err := lc.SetServeConfig(ctx, sc); err != nil {
log.Fatalf("failed to set serve config: %v", err)
validateHTTPSServe(certDomain, sc)
if err := updateServeConfig(ctx, sc, certDomain, lc); err != nil {
log.Fatalf("serve proxy: error updating serve config: %v", err)
}
if kc != nil && kc.canPatch {
if err := kc.storeHTTPSEndpoint(ctx, certDomain); err != nil {
log.Fatalf("serve proxy: error storing HTTPS endpoint: %v", err)
}
}
prevServeConfig = sc
}
}
func certDomainFromNetmap(nm *netmap.NetworkMap) string {
if len(nm.DNS.CertDomains) == 0 {
return ""
}
return nm.DNS.CertDomains[0]
}
func updateServeConfig(ctx context.Context, sc *ipn.ServeConfig, certDomain string, lc *tailscale.LocalClient) error {
// TODO(irbekrm): This means that a serve config that does not expose an HTTPS endpoint will not be set for a tailnet
// that does not have HTTPS enabled. We probably want to fix this.
if certDomain == kubetypes.ValueNoHTTPS {
return nil
}
log.Printf("serve proxy: applying serve config")
return lc.SetServeConfig(ctx, sc)
}
func validateHTTPSServe(certDomain string, sc *ipn.ServeConfig) {
if certDomain != kubetypes.ValueNoHTTPS || !hasHTTPSEndpoint(sc) {
return
}
log.Printf(
`serve proxy: this node is configured as a proxy that exposes an HTTPS endpoint to the tailnet
(perhaps a Kubernetes operator Ingress proxy), but it is not able to issue TLS certs, so this will likely not work.
To make it work, ensure that HTTPS is enabled for your tailnet; see https://tailscale.com/kb/1153/enabling-https for more details.`)
}
func hasHTTPSEndpoint(cfg *ipn.ServeConfig) bool {
for _, tcpCfg := range cfg.TCP {
if tcpCfg.HTTPS {
return true
}
}
return false
}
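
For clarity on what counts as an HTTPS endpoint in the validation above, a tiny sketch using the same ipn.ServeConfig shape as the containerboot test earlier in this diff (port numbers are arbitrary; fmt import assumed):

func hasHTTPSEndpointSketch() {
	plainHTTP := &ipn.ServeConfig{TCP: map[uint16]*ipn.TCPPortHandler{80: {HTTP: true}}}
	withTLS := &ipn.ServeConfig{TCP: map[uint16]*ipn.TCPPortHandler{443: {HTTPS: true}}}
	fmt.Println(hasHTTPSEndpoint(plainHTTP)) // false
	fmt.Println(hasHTTPSEndpoint(withTLS))   // true
}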
// readServeConfig reads the ipn.ServeConfig from path, replacing
// ${TS_CERT_DOMAIN} with certDomain.
func readServeConfig(path, certDomain string) (*ipn.ServeConfig, error) {

View File

@@ -0,0 +1,571 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build linux
package main
import (
"context"
"encoding/json"
"errors"
"fmt"
"log"
"net/netip"
"os"
"path/filepath"
"reflect"
"strings"
"time"
"github.com/fsnotify/fsnotify"
"tailscale.com/ipn"
"tailscale.com/kube/egressservices"
"tailscale.com/kube/kubeclient"
"tailscale.com/tailcfg"
"tailscale.com/util/linuxfw"
"tailscale.com/util/mak"
)
const tailscaleTunInterface = "tailscale0"
// This file contains functionality to run containerboot as a proxy that can
// route cluster traffic to one or more tailnet targets, based on portmapping
// rules read from a configfile. Currently (9/2024) this is only used for the
// Kubernetes operator egress proxies.
// egressProxy knows how to configure firewall rules to route cluster traffic to
// one or more tailnet services.
type egressProxy struct {
cfgPath string // path to egress service config file
nfr linuxfw.NetfilterRunner // never nil
kc kubeclient.Client // never nil
stateSecret string // name of the kube state Secret
netmapChan chan ipn.Notify // chan to receive netmap updates on
podIPv4 string // never empty string, currently only IPv4 is supported
// targetFQDNs is the egress service FQDN to tailnet IP mappings that
// were last used to configure firewall rules for this proxy.
// TODO(irbekrm): target addresses are also stored in the state Secret.
// Evaluate whether we should retrieve them from there and not store in
// memory at all.
targetFQDNs map[string][]netip.Prefix
// tailnetAddrs are the proxy's tailnet addresses that were last used to configure firewall rules.
tailnetAddrs []netip.Prefix
}
// run configures egress proxy firewall rules and ensures that the firewall rules are reconfigured when:
// - the mounted egress config has changed
// - the proxy's tailnet IP addresses have changed
// - tailnet IPs have changed for any backend targets specified by tailnet FQDN
func (ep *egressProxy) run(ctx context.Context, n ipn.Notify) error {
var tickChan <-chan time.Time
var eventChan <-chan fsnotify.Event
// TODO (irbekrm): take a look if this can be pulled into a single func
// shared with serve config loader.
if w, err := fsnotify.NewWatcher(); err != nil {
log.Printf("failed to create fsnotify watcher, timer-only mode: %v", err)
ticker := time.NewTicker(5 * time.Second)
defer ticker.Stop()
tickChan = ticker.C
} else {
defer w.Close()
if err := w.Add(filepath.Dir(ep.cfgPath)); err != nil {
return fmt.Errorf("failed to add fsnotify watch: %w", err)
}
eventChan = w.Events
}
if err := ep.sync(ctx, n); err != nil {
return err
}
for {
var err error
select {
case <-ctx.Done():
return nil
case <-tickChan:
err = ep.sync(ctx, n)
case <-eventChan:
log.Printf("config file change detected, ensuring firewall config is up to date...")
err = ep.sync(ctx, n)
case n = <-ep.netmapChan:
shouldResync := ep.shouldResync(n)
if shouldResync {
log.Printf("netmap change detected, ensuring firewall config is up to date...")
err = ep.sync(ctx, n)
}
}
if err != nil {
return fmt.Errorf("error syncing egress service config: %w", err)
}
}
}
// sync triggers an egress proxy config resync. The resync calculates the diff between config and status to determine if
// any firewall rules need to be updated. Using the status stored in the state Secret as a reference for the current
// firewall configuration is good enough because the status is keyed by the Pod IP and we crash the Pod on errors such
// as a failed firewall update.
func (ep *egressProxy) sync(ctx context.Context, n ipn.Notify) error {
cfgs, err := ep.getConfigs()
if err != nil {
return fmt.Errorf("error retrieving egress service configs: %w", err)
}
status, err := ep.getStatus(ctx)
if err != nil {
return fmt.Errorf("error retrieving current egress proxy status: %w", err)
}
newStatus, err := ep.syncEgressConfigs(cfgs, status, n)
if err != nil {
return fmt.Errorf("error syncing egress service configs: %w", err)
}
if !servicesStatusIsEqual(newStatus, status) {
if err := ep.setStatus(ctx, newStatus, n); err != nil {
return fmt.Errorf("error setting egress proxy status: %w", err)
}
}
return nil
}
// addrsHaveChanged returns true if the provided netmap update contains a tailnet address change for this proxy node.
// Netmap must not be nil.
func (ep *egressProxy) addrsHaveChanged(n ipn.Notify) bool {
return !reflect.DeepEqual(ep.tailnetAddrs, n.NetMap.SelfNode.Addresses())
}
// syncEgressConfigs adds and deletes firewall rules to match the desired
// configuration. It uses the provided status to determine what is currently
// applied and updates the status after a successful sync.
func (ep *egressProxy) syncEgressConfigs(cfgs *egressservices.Configs, status *egressservices.Status, n ipn.Notify) (*egressservices.Status, error) {
if !(wantsServicesConfigured(cfgs) || hasServicesConfigured(status)) {
return nil, nil
}
// Delete unnecessary services.
if err := ep.deleteUnnecessaryServices(cfgs, status); err != nil {
return nil, fmt.Errorf("error deleting services: %w", err)
}
newStatus := &egressservices.Status{}
if !wantsServicesConfigured(cfgs) {
return newStatus, nil
}
// Add new services, update rules for any that have changed.
rulesPerSvcToAdd := make(map[string][]rule, 0)
rulesPerSvcToDelete := make(map[string][]rule, 0)
for svcName, cfg := range *cfgs {
tailnetTargetIPs, err := ep.tailnetTargetIPsForSvc(cfg, n)
if err != nil {
return nil, fmt.Errorf("error determining tailnet target IPs: %w", err)
}
rulesToAdd, rulesToDelete, err := updatesForCfg(svcName, cfg, status, tailnetTargetIPs)
if err != nil {
return nil, fmt.Errorf("error validating service changes: %v", err)
}
log.Printf("syncegressservices: looking at svc %s rulesToAdd %d rulesToDelete %d", svcName, len(rulesToAdd), len(rulesToDelete))
if len(rulesToAdd) != 0 {
mak.Set(&rulesPerSvcToAdd, svcName, rulesToAdd)
}
if len(rulesToDelete) != 0 {
mak.Set(&rulesPerSvcToDelete, svcName, rulesToDelete)
}
if len(rulesToAdd) != 0 || ep.addrsHaveChanged(n) {
// For each tailnet target, set up SNAT from the local tailnet device address of the matching
// family.
for _, t := range tailnetTargetIPs {
var local netip.Addr
for _, pfx := range n.NetMap.SelfNode.Addresses().All() {
if !pfx.IsSingleIP() {
continue
}
if pfx.Addr().Is4() != t.Is4() {
continue
}
local = pfx.Addr()
break
}
if !local.IsValid() {
return nil, fmt.Errorf("no valid local IP: %v", local)
}
if err := ep.nfr.EnsureSNATForDst(local, t); err != nil {
return nil, fmt.Errorf("error setting up SNAT rule: %w", err)
}
}
}
// Update the status. Status will be written back to the state Secret by the caller.
mak.Set(&newStatus.Services, svcName, &egressservices.ServiceStatus{TailnetTargetIPs: tailnetTargetIPs, TailnetTarget: cfg.TailnetTarget, Ports: cfg.Ports})
}
// Actually apply the firewall rules.
if err := ensureRulesAdded(rulesPerSvcToAdd, ep.nfr); err != nil {
return nil, fmt.Errorf("error adding rules: %w", err)
}
if err := ensureRulesDeleted(rulesPerSvcToDelete, ep.nfr); err != nil {
return nil, fmt.Errorf("error deleting rules: %w", err)
}
return newStatus, nil
}
// updatesForCfg calculates any rules that need to be added or deleted for an individual egress service config.
func updatesForCfg(svcName string, cfg egressservices.Config, status *egressservices.Status, tailnetTargetIPs []netip.Addr) ([]rule, []rule, error) {
rulesToAdd := make([]rule, 0)
rulesToDelete := make([]rule, 0)
currentConfig, ok := lookupCurrentConfig(svcName, status)
// If no rules for service are present yet, add them all.
if !ok {
for _, t := range tailnetTargetIPs {
for ports := range cfg.Ports {
log.Printf("syncegressservices: svc %s adding port %v", svcName, ports)
rulesToAdd = append(rulesToAdd, rule{tailnetPort: ports.TargetPort, containerPort: ports.MatchPort, protocol: ports.Protocol, tailnetIP: t})
}
}
return rulesToAdd, rulesToDelete, nil
}
// If there are no backend targets available, delete any currently configured rules.
if len(tailnetTargetIPs) == 0 {
log.Printf("tailnet target for egress service %s does not have any backend addresses, deleting all rules", svcName)
for _, ip := range currentConfig.TailnetTargetIPs {
for ports := range currentConfig.Ports {
rulesToDelete = append(rulesToDelete, rule{tailnetPort: ports.TargetPort, containerPort: ports.MatchPort, protocol: ports.Protocol, tailnetIP: ip})
}
}
return rulesToAdd, rulesToDelete, nil
}
// If there are rules present for backend targets that no longer match, delete them.
for _, ip := range currentConfig.TailnetTargetIPs {
var found bool
for _, wantsIP := range tailnetTargetIPs {
if reflect.DeepEqual(ip, wantsIP) {
found = true
break
}
}
if !found {
for ports := range currentConfig.Ports {
rulesToDelete = append(rulesToDelete, rule{tailnetPort: ports.TargetPort, containerPort: ports.MatchPort, protocol: ports.Protocol, tailnetIP: ip})
}
}
}
// Sync rules for the currently wanted backend targets.
for _, ip := range tailnetTargetIPs {
// If the backend target is not yet present in status, add all rules.
var found bool
for _, gotIP := range currentConfig.TailnetTargetIPs {
if reflect.DeepEqual(ip, gotIP) {
found = true
break
}
}
if !found {
for ports := range cfg.Ports {
rulesToAdd = append(rulesToAdd, rule{tailnetPort: ports.TargetPort, containerPort: ports.MatchPort, protocol: ports.Protocol, tailnetIP: ip})
}
continue
}
// If the backend target is present in status, check that the
// currently applied rules are up to date.
// Delete any current portmappings that are no longer present in config.
for port := range currentConfig.Ports {
if _, ok := cfg.Ports[port]; ok {
continue
}
rulesToDelete = append(rulesToDelete, rule{tailnetPort: port.TargetPort, containerPort: port.MatchPort, protocol: port.Protocol, tailnetIP: ip})
}
// Add any new portmappings.
for port := range cfg.Ports {
if _, ok := currentConfig.Ports[port]; ok {
continue
}
rulesToAdd = append(rulesToAdd, rule{tailnetPort: port.TargetPort, containerPort: port.MatchPort, protocol: port.Protocol, tailnetIP: ip})
}
}
return rulesToAdd, rulesToDelete, nil
}
// deleteUnnecessaryServices ensures that any services found in the status, but not
// present in the config, are deleted.
func (ep *egressProxy) deleteUnnecessaryServices(cfgs *egressservices.Configs, status *egressservices.Status) error {
if !hasServicesConfigured(status) {
return nil
}
if !wantsServicesConfigured(cfgs) {
for svcName, svc := range status.Services {
log.Printf("service %s is no longer required, deleting", svcName)
if err := ensureServiceDeleted(svcName, svc, ep.nfr); err != nil {
return fmt.Errorf("error deleting service %s: %w", svcName, err)
}
}
return nil
}
for svcName, svc := range status.Services {
if _, ok := (*cfgs)[svcName]; !ok {
log.Printf("service %s is no longer required, deleting", svcName)
if err := ensureServiceDeleted(svcName, svc, ep.nfr); err != nil {
return fmt.Errorf("error deleting service %s: %w", svcName, err)
}
// TODO (irbekrm): also delete the SNAT rule here
}
}
return nil
}
// getConfigs gets the mounted egress service configuration.
func (ep *egressProxy) getConfigs() (*egressservices.Configs, error) {
j, err := os.ReadFile(ep.cfgPath)
if os.IsNotExist(err) {
return nil, nil
}
if err != nil {
return nil, err
}
if len(j) == 0 || string(j) == "" {
return nil, nil
}
cfg := &egressservices.Configs{}
if err := json.Unmarshal(j, &cfg); err != nil {
return nil, err
}
return cfg, nil
}
// getStatus gets the current status of the configured firewall. The current
// status is stored in state Secret. Returns nil status if no status that
// applies to the current proxy Pod was found. Uses the Pod IP to determine if a
// status found in the state Secret applies to this proxy Pod.
func (ep *egressProxy) getStatus(ctx context.Context) (*egressservices.Status, error) {
secret, err := ep.kc.GetSecret(ctx, ep.stateSecret)
if err != nil {
return nil, fmt.Errorf("error retrieving state secret: %w", err)
}
status := &egressservices.Status{}
raw, ok := secret.Data[egressservices.KeyEgressServices]
if !ok {
return nil, nil
}
if err := json.Unmarshal([]byte(raw), status); err != nil {
return nil, fmt.Errorf("error unmarshalling previous config: %w", err)
}
if reflect.DeepEqual(status.PodIPv4, ep.podIPv4) {
return status, nil
}
return nil, nil
}
// setStatus writes egress proxy's currently configured firewall to the state
// Secret and updates proxy's tailnet addresses.
func (ep *egressProxy) setStatus(ctx context.Context, status *egressservices.Status, n ipn.Notify) error {
// Pod IP is used to determine if a stored status applies to THIS proxy Pod.
if status == nil {
status = &egressservices.Status{}
}
status.PodIPv4 = ep.podIPv4
secret, err := ep.kc.GetSecret(ctx, ep.stateSecret)
if err != nil {
return fmt.Errorf("error retrieving state Secret: %w", err)
}
bs, err := json.Marshal(status)
if err != nil {
return fmt.Errorf("error marshalling service config: %w", err)
}
secret.Data[egressservices.KeyEgressServices] = bs
patch := kubeclient.JSONPatch{
Op: "replace",
Path: fmt.Sprintf("/data/%s", egressservices.KeyEgressServices),
Value: bs,
}
if err := ep.kc.JSONPatchResource(ctx, ep.stateSecret, kubeclient.TypeSecrets, []kubeclient.JSONPatch{patch}); err != nil {
return fmt.Errorf("error patching state Secret: %w", err)
}
ep.tailnetAddrs = n.NetMap.SelfNode.Addresses().AsSlice()
return nil
}
// tailnetTargetIPsForSvc returns the tailnet IPs to which traffic for this
// egress service should be proxied. The egress service can be configured by IP
// or by FQDN. If it's configured by IP, just return that. If it's configured by
// FQDN, resolve the FQDN and return the resolved IPs. It checks if the
// netfilter runner supports IPv6 NAT and skips any IPv6 addresses if it
// doesn't.
func (ep *egressProxy) tailnetTargetIPsForSvc(svc egressservices.Config, n ipn.Notify) (addrs []netip.Addr, err error) {
if svc.TailnetTarget.IP != "" {
addr, err := netip.ParseAddr(svc.TailnetTarget.IP)
if err != nil {
return nil, fmt.Errorf("error parsing tailnet target IP: %w", err)
}
if addr.Is6() && !ep.nfr.HasIPV6NAT() {
log.Printf("tailnet target is an IPv6 address, but this host does not support IPv6 in the chosen firewall mode. This will probably not work.")
return addrs, nil
}
return []netip.Addr{addr}, nil
}
if svc.TailnetTarget.FQDN == "" {
return nil, errors.New("unexpected egress service config- neither tailnet target IP nor FQDN is set")
}
if n.NetMap == nil {
log.Printf("netmap is not available, unable to determine backend addresses for %s", svc.TailnetTarget.FQDN)
return addrs, nil
}
var (
node tailcfg.NodeView
nodeFound bool
)
for _, nn := range n.NetMap.Peers {
if equalFQDNs(nn.Name(), svc.TailnetTarget.FQDN) {
node = nn
nodeFound = true
break
}
}
if nodeFound {
for _, addr := range node.Addresses().AsSlice() {
if addr.Addr().Is6() && !ep.nfr.HasIPV6NAT() {
log.Printf("tailnet target %v is an IPv6 address, but this host does not support IPv6 in the chosen firewall mode, skipping.", addr.Addr().String())
continue
}
addrs = append(addrs, addr.Addr())
}
// Egress target endpoints configured via FQDN are stored, so
// that we can determine if a netmap update should trigger a
// resync.
mak.Set(&ep.targetFQDNs, svc.TailnetTarget.FQDN, node.Addresses().AsSlice())
}
return addrs, nil
}
// shouldResync parses netmap update and returns true if the update contains
// changes for which the egress proxy's firewall should be reconfigured.
func (ep *egressProxy) shouldResync(n ipn.Notify) bool {
if n.NetMap == nil {
return false
}
// If proxy's tailnet addresses have changed, resync.
if !reflect.DeepEqual(n.NetMap.SelfNode.Addresses().AsSlice(), ep.tailnetAddrs) {
log.Printf("node addresses have changed, trigger egress config resync")
ep.tailnetAddrs = n.NetMap.SelfNode.Addresses().AsSlice()
return true
}
// If the IPs for any of the egress services configured via FQDN have
// changed, resync.
for fqdn, ips := range ep.targetFQDNs {
for _, nn := range n.NetMap.Peers {
if equalFQDNs(nn.Name(), fqdn) {
if !reflect.DeepEqual(ips, nn.Addresses().AsSlice()) {
log.Printf("backend addresses for egress target %q have changed from %v to %v, triggering egress config resync", nn.Name(), ips, nn.Addresses().AsSlice())
return true
}
}
}
}
return false
}
// ensureServiceDeleted ensures that any rules for an egress service are removed
// from the firewall configuration.
func ensureServiceDeleted(svcName string, svc *egressservices.ServiceStatus, nfr linuxfw.NetfilterRunner) error {
// Note that the portmap is needed for an iptables-based firewall only.
// Nftables groups rules for a service in a chain, so there is no need to
// specify individual portmapping-based rules.
pms := make([]linuxfw.PortMap, 0)
for pm := range svc.Ports {
pms = append(pms, linuxfw.PortMap{MatchPort: pm.MatchPort, TargetPort: pm.TargetPort, Protocol: pm.Protocol})
}
if err := nfr.DeleteSvc(svcName, tailscaleTunInterface, svc.TailnetTargetIPs, pms); err != nil {
return fmt.Errorf("error deleting service %s: %w", svcName, err)
}
return nil
}
// ensureRulesAdded ensures that all portmapping rules are added to the firewall
// configuration. For any rules that already exist, calling this function is a
// no-op. In case of nftables, a service consists of one or two (one per IP
// family) chains that contain the portmapping rules for the service; the
// chains are created as needed when this function is called.
func ensureRulesAdded(rulesPerSvc map[string][]rule, nfr linuxfw.NetfilterRunner) error {
for svc, rules := range rulesPerSvc {
for _, rule := range rules {
log.Printf("ensureRulesAdded svc %s tailnetTarget %s container port %d tailnet port %d protocol %s", svc, rule.tailnetIP, rule.containerPort, rule.tailnetPort, rule.protocol)
if err := nfr.EnsurePortMapRuleForSvc(svc, tailscaleTunInterface, rule.tailnetIP, linuxfw.PortMap{MatchPort: rule.containerPort, TargetPort: rule.tailnetPort, Protocol: rule.protocol}); err != nil {
return fmt.Errorf("error ensuring rule: %w", err)
}
}
}
return nil
}
// ensureRulesDeleted ensures that the given rules are deleted from the firewall
// configuration. For any rules that do not exist, calling this function is a
// no-op.
func ensureRulesDeleted(rulesPerSvc map[string][]rule, nfr linuxfw.NetfilterRunner) error {
for svc, rules := range rulesPerSvc {
for _, rule := range rules {
log.Printf("ensureRulesDeleted svc %s tailnetTarget %s container port %d tailnet port %d protocol %s", svc, rule.tailnetIP, rule.containerPort, rule.tailnetPort, rule.protocol)
if err := nfr.DeletePortMapRuleForSvc(svc, tailscaleTunInterface, rule.tailnetIP, linuxfw.PortMap{MatchPort: rule.containerPort, TargetPort: rule.tailnetPort, Protocol: rule.protocol}); err != nil {
return fmt.Errorf("error deleting rule: %w", err)
}
}
}
return nil
}
func lookupCurrentConfig(svcName string, status *egressservices.Status) (*egressservices.ServiceStatus, bool) {
if status == nil || len(status.Services) == 0 {
return nil, false
}
c, ok := status.Services[svcName]
return c, ok
}
func equalFQDNs(s, s1 string) bool {
s, _ = strings.CutSuffix(s, ".")
s1, _ = strings.CutSuffix(s1, ".")
return strings.EqualFold(s, s1)
}
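A couple of concrete cases for the normalization above, as a test-style sketch (hostnames are made up; assumes the standard testing package):

func TestEqualFQDNs(t *testing.T) {
	// Trailing dot and letter case are ignored.
	if !equalFQDNs("svc.example.ts.net.", "SVC.example.ts.net") {
		t.Error("expected FQDNs differing only in trailing dot and case to match")
	}
	// Different names still differ.
	if equalFQDNs("svc.example.ts.net", "other.example.ts.net") {
		t.Error("expected different FQDNs not to match")
	}
}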
// rule contains configuration for an egress proxy firewall rule.
type rule struct {
containerPort uint16 // port to match incoming traffic
tailnetPort uint16 // tailnet service port
tailnetIP netip.Addr // tailnet service IP
protocol string
}
func wantsServicesConfigured(cfgs *egressservices.Configs) bool {
return cfgs != nil && len(*cfgs) != 0
}
func hasServicesConfigured(status *egressservices.Status) bool {
return status != nil && len(status.Services) != 0
}
func servicesStatusIsEqual(st, st1 *egressservices.Status) bool {
if st == nil && st1 == nil {
return true
}
if st == nil || st1 == nil {
return false
}
st.PodIPv4 = ""
st1.PodIPv4 = ""
return reflect.DeepEqual(*st, *st1)
}
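Note that the helper clears PodIPv4 on both inputs before the DeepEqual, so two statuses that differ only in the Pod's IPv4 address still compare equal (and the inputs are mutated as a side effect). A small test-style sketch, assuming the egressservices types and the helper above:

func TestServicesStatusIsEqualIgnoresPodIPv4(t *testing.T) {
	st := &egressservices.Status{PodIPv4: "10.0.0.5"}
	st1 := &egressservices.Status{PodIPv4: "10.0.0.7"}
	// PodIPv4 is zeroed on both sides; the remaining fields (all zero
	// values here) then compare equal.
	if !servicesStatusIsEqual(st, st1) {
		t.Error("expected statuses differing only in PodIPv4 to compare equal")
	}
}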

View File

@@ -0,0 +1,175 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build linux
package main
import (
"net/netip"
"reflect"
"testing"
"tailscale.com/kube/egressservices"
)
func Test_updatesForSvc(t *testing.T) {
tailnetIPv4, tailnetIPv6 := netip.MustParseAddr("100.99.99.99"), netip.MustParseAddr("fd7a:115c:a1e0::701:b62a")
tailnetIPv4_1, tailnetIPv6_1 := netip.MustParseAddr("100.88.88.88"), netip.MustParseAddr("fd7a:115c:a1e0::4101:512f")
ports := map[egressservices.PortMap]struct{}{{Protocol: "tcp", MatchPort: 4003, TargetPort: 80}: {}}
ports1 := map[egressservices.PortMap]struct{}{{Protocol: "udp", MatchPort: 4004, TargetPort: 53}: {}}
ports2 := map[egressservices.PortMap]struct{}{{Protocol: "tcp", MatchPort: 4003, TargetPort: 80}: {},
{Protocol: "tcp", MatchPort: 4005, TargetPort: 443}: {}}
fqdnSpec := egressservices.Config{
TailnetTarget: egressservices.TailnetTarget{FQDN: "test"},
Ports: ports,
}
fqdnSpec1 := egressservices.Config{
TailnetTarget: egressservices.TailnetTarget{FQDN: "test"},
Ports: ports1,
}
fqdnSpec2 := egressservices.Config{
TailnetTarget: egressservices.TailnetTarget{IP: tailnetIPv4.String()},
Ports: ports,
}
fqdnSpec3 := egressservices.Config{
TailnetTarget: egressservices.TailnetTarget{IP: tailnetIPv4.String()},
Ports: ports2,
}
r := rule{containerPort: 4003, tailnetPort: 80, protocol: "tcp", tailnetIP: tailnetIPv4}
r1 := rule{containerPort: 4003, tailnetPort: 80, protocol: "tcp", tailnetIP: tailnetIPv6}
r2 := rule{tailnetPort: 53, containerPort: 4004, protocol: "udp", tailnetIP: tailnetIPv4}
r3 := rule{tailnetPort: 53, containerPort: 4004, protocol: "udp", tailnetIP: tailnetIPv6}
r4 := rule{containerPort: 4003, tailnetPort: 80, protocol: "tcp", tailnetIP: tailnetIPv4_1}
r5 := rule{containerPort: 4003, tailnetPort: 80, protocol: "tcp", tailnetIP: tailnetIPv6_1}
r6 := rule{containerPort: 4005, tailnetPort: 443, protocol: "tcp", tailnetIP: tailnetIPv4}
tests := []struct {
name string
svcName string
tailnetTargetIPs []netip.Addr
podIP string
spec egressservices.Config
status *egressservices.Status
wantRulesToAdd []rule
wantRulesToDelete []rule
}{
{
name: "add_fqdn_svc_that_does_not_yet_exist",
svcName: "test",
tailnetTargetIPs: []netip.Addr{tailnetIPv4, tailnetIPv6},
spec: fqdnSpec,
status: &egressservices.Status{},
wantRulesToAdd: []rule{r, r1},
wantRulesToDelete: []rule{},
},
{
name: "fqdn_svc_already_exists",
svcName: "test",
tailnetTargetIPs: []netip.Addr{tailnetIPv4, tailnetIPv6},
spec: fqdnSpec,
status: &egressservices.Status{
Services: map[string]*egressservices.ServiceStatus{"test": {
TailnetTargetIPs: []netip.Addr{tailnetIPv4, tailnetIPv6},
TailnetTarget: egressservices.TailnetTarget{FQDN: "test"},
Ports: ports,
}}},
wantRulesToAdd: []rule{},
wantRulesToDelete: []rule{},
},
{
name: "fqdn_svc_already_exists_add_port_remove_port",
svcName: "test",
tailnetTargetIPs: []netip.Addr{tailnetIPv4, tailnetIPv6},
spec: fqdnSpec1,
status: &egressservices.Status{
Services: map[string]*egressservices.ServiceStatus{"test": {
TailnetTargetIPs: []netip.Addr{tailnetIPv4, tailnetIPv6},
TailnetTarget: egressservices.TailnetTarget{FQDN: "test"},
Ports: ports,
}}},
wantRulesToAdd: []rule{r2, r3},
wantRulesToDelete: []rule{r, r1},
},
{
name: "fqdn_svc_already_exists_change_fqdn_backend_ips",
svcName: "test",
tailnetTargetIPs: []netip.Addr{tailnetIPv4_1, tailnetIPv6_1},
spec: fqdnSpec,
status: &egressservices.Status{
Services: map[string]*egressservices.ServiceStatus{"test": {
TailnetTargetIPs: []netip.Addr{tailnetIPv4, tailnetIPv6},
TailnetTarget: egressservices.TailnetTarget{FQDN: "test"},
Ports: ports,
}}},
wantRulesToAdd: []rule{r4, r5},
wantRulesToDelete: []rule{r, r1},
},
{
name: "add_ip_service",
svcName: "test",
tailnetTargetIPs: []netip.Addr{tailnetIPv4},
spec: fqdnSpec2,
status: &egressservices.Status{},
wantRulesToAdd: []rule{r},
wantRulesToDelete: []rule{},
},
{
name: "add_ip_service_already_exists",
svcName: "test",
tailnetTargetIPs: []netip.Addr{tailnetIPv4},
spec: fqdnSpec2,
status: &egressservices.Status{
Services: map[string]*egressservices.ServiceStatus{"test": {
TailnetTargetIPs: []netip.Addr{tailnetIPv4},
TailnetTarget: egressservices.TailnetTarget{IP: tailnetIPv4.String()},
Ports: ports,
}}},
wantRulesToAdd: []rule{},
wantRulesToDelete: []rule{},
},
{
name: "ip_service_add_port",
svcName: "test",
tailnetTargetIPs: []netip.Addr{tailnetIPv4},
spec: fqdnSpec3,
status: &egressservices.Status{
Services: map[string]*egressservices.ServiceStatus{"test": {
TailnetTargetIPs: []netip.Addr{tailnetIPv4},
TailnetTarget: egressservices.TailnetTarget{IP: tailnetIPv4.String()},
Ports: ports,
}}},
wantRulesToAdd: []rule{r6},
wantRulesToDelete: []rule{},
},
{
name: "ip_service_delete_port",
svcName: "test",
tailnetTargetIPs: []netip.Addr{tailnetIPv4},
spec: fqdnSpec,
status: &egressservices.Status{
Services: map[string]*egressservices.ServiceStatus{"test": {
TailnetTargetIPs: []netip.Addr{tailnetIPv4},
TailnetTarget: egressservices.TailnetTarget{IP: tailnetIPv4.String()},
Ports: ports2,
}}},
wantRulesToAdd: []rule{},
wantRulesToDelete: []rule{r6},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
gotRulesToAdd, gotRulesToDelete, err := updatesForCfg(tt.svcName, tt.spec, tt.status, tt.tailnetTargetIPs)
if err != nil {
t.Errorf("updatesForSvc() unexpected error %v", err)
return
}
if !reflect.DeepEqual(gotRulesToAdd, tt.wantRulesToAdd) {
t.Errorf("updatesForSvc() got rulesToAdd = \n%v\n want rulesToAdd \n%v", gotRulesToAdd, tt.wantRulesToAdd)
}
if !reflect.DeepEqual(gotRulesToDelete, tt.wantRulesToDelete) {
t.Errorf("updatesForSvc() got rulesToDelete = \n%v\n want rulesToDelete \n%v", gotRulesToDelete, tt.wantRulesToDelete)
}
})
}
}

View File

@@ -14,6 +14,7 @@ import (
"os"
"path"
"strconv"
"strings"
"tailscale.com/ipn/conffile"
"tailscale.com/kube/kubeclient"
@@ -62,8 +63,75 @@ type settings struct {
// PodIP is the IP of the Pod if running in Kubernetes. This is used
// when setting up rules to proxy cluster traffic to the cluster ingress
// target.
// Deprecated: use PodIPv4 and PodIPv6 instead to support dual-stack clusters.
PodIP string
PodIPv4 string
PodIPv6 string
PodUID string
HealthCheckAddrPort string
LocalAddrPort string
MetricsEnabled bool
HealthCheckEnabled bool
DebugAddrPort string
EgressSvcsCfgPath string
}
func configFromEnv() (*settings, error) {
cfg := &settings{
AuthKey: defaultEnvs([]string{"TS_AUTHKEY", "TS_AUTH_KEY"}, ""),
Hostname: defaultEnv("TS_HOSTNAME", ""),
Routes: defaultEnvStringPointer("TS_ROUTES"),
ServeConfigPath: defaultEnv("TS_SERVE_CONFIG", ""),
ProxyTargetIP: defaultEnv("TS_DEST_IP", ""),
ProxyTargetDNSName: defaultEnv("TS_EXPERIMENTAL_DEST_DNS_NAME", ""),
TailnetTargetIP: defaultEnv("TS_TAILNET_TARGET_IP", ""),
TailnetTargetFQDN: defaultEnv("TS_TAILNET_TARGET_FQDN", ""),
DaemonExtraArgs: defaultEnv("TS_TAILSCALED_EXTRA_ARGS", ""),
ExtraArgs: defaultEnv("TS_EXTRA_ARGS", ""),
InKubernetes: os.Getenv("KUBERNETES_SERVICE_HOST") != "",
UserspaceMode: defaultBool("TS_USERSPACE", true),
StateDir: defaultEnv("TS_STATE_DIR", ""),
AcceptDNS: defaultEnvBoolPointer("TS_ACCEPT_DNS"),
KubeSecret: defaultEnv("TS_KUBE_SECRET", "tailscale"),
SOCKSProxyAddr: defaultEnv("TS_SOCKS5_SERVER", ""),
HTTPProxyAddr: defaultEnv("TS_OUTBOUND_HTTP_PROXY_LISTEN", ""),
Socket: defaultEnv("TS_SOCKET", "/tmp/tailscaled.sock"),
AuthOnce: defaultBool("TS_AUTH_ONCE", false),
Root: defaultEnv("TS_TEST_ONLY_ROOT", "/"),
TailscaledConfigFilePath: tailscaledConfigFilePath(),
AllowProxyingClusterTrafficViaIngress: defaultBool("EXPERIMENTAL_ALLOW_PROXYING_CLUSTER_TRAFFIC_VIA_INGRESS", false),
PodIP: defaultEnv("POD_IP", ""),
EnableForwardingOptimizations: defaultBool("TS_EXPERIMENTAL_ENABLE_FORWARDING_OPTIMIZATIONS", false),
HealthCheckAddrPort: defaultEnv("TS_HEALTHCHECK_ADDR_PORT", ""),
LocalAddrPort: defaultEnv("TS_LOCAL_ADDR_PORT", "[::]:9002"),
MetricsEnabled: defaultBool("TS_ENABLE_METRICS", false),
HealthCheckEnabled: defaultBool("TS_ENABLE_HEALTH_CHECK", false),
DebugAddrPort: defaultEnv("TS_DEBUG_ADDR_PORT", ""),
EgressSvcsCfgPath: defaultEnv("TS_EGRESS_SERVICES_CONFIG_PATH", ""),
PodUID: defaultEnv("POD_UID", ""),
}
podIPs, ok := os.LookupEnv("POD_IPS")
if ok {
ips := strings.Split(podIPs, ",")
if len(ips) > 2 {
return nil, fmt.Errorf("POD_IPs can contain at most 2 IPs, got %d (%v)", len(ips), ips)
}
for _, ip := range ips {
parsed, err := netip.ParseAddr(ip)
if err != nil {
return nil, fmt.Errorf("error parsing IP address %s: %w", ip, err)
}
if parsed.Is4() {
cfg.PodIPv4 = parsed.String()
continue
}
cfg.PodIPv6 = parsed.String()
}
}
if err := cfg.validate(); err != nil {
return nil, fmt.Errorf("invalid configuration: %v", err)
}
return cfg, nil
}
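The POD_IPS handling above assigns at most one IPv4 and one IPv6 address from a comma-separated list. A standalone sketch of the same parsing (the sample value is illustrative, not from the diff):

package main

import (
	"fmt"
	"net/netip"
	"strings"
)

func main() {
	podIPs := "10.40.2.12,fd7a:115c:a1e0::12" // e.g. the POD_IPS env var value
	var podIPv4, podIPv6 string
	for _, ip := range strings.Split(podIPs, ",") {
		parsed, err := netip.ParseAddr(ip)
		if err != nil {
			panic(err) // configFromEnv returns an error instead
		}
		if parsed.Is4() {
			podIPv4 = parsed.String()
			continue
		}
		podIPv6 = parsed.String()
	}
	fmt.Println(podIPv4, podIPv6) // 10.40.2.12 fd7a:115c:a1e0::12
}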
func (s *settings) validate() error {
@@ -113,60 +181,85 @@ func (s *settings) validate() error {
return errors.New("TS_EXPERIMENTAL_ENABLE_FORWARDING_OPTIMIZATIONS is not supported in userspace mode")
}
if s.HealthCheckAddrPort != "" {
log.Printf("[warning] TS_HEALTHCHECK_ADDR_PORT is deprecated and will be removed in 1.82.0. Please use TS_ENABLE_HEALTH_CHECK and optionally TS_LOCAL_ADDR_PORT instead.")
if _, err := netip.ParseAddrPort(s.HealthCheckAddrPort); err != nil {
return fmt.Errorf("error parsing TS_HEALTH_CHECK_ADDR_PORT value %q: %w", s.HealthCheckAddrPort, err)
return fmt.Errorf("error parsing TS_HEALTHCHECK_ADDR_PORT value %q: %w", s.HealthCheckAddrPort, err)
}
}
if s.localMetricsEnabled() || s.localHealthEnabled() {
if _, err := netip.ParseAddrPort(s.LocalAddrPort); err != nil {
return fmt.Errorf("error parsing TS_LOCAL_ADDR_PORT value %q: %w", s.LocalAddrPort, err)
}
}
if s.DebugAddrPort != "" {
if _, err := netip.ParseAddrPort(s.DebugAddrPort); err != nil {
return fmt.Errorf("error parsing TS_DEBUG_ADDR_PORT value %q: %w", s.DebugAddrPort, err)
}
}
if s.HealthCheckEnabled && s.HealthCheckAddrPort != "" {
return errors.New("TS_HEALTHCHECK_ADDR_PORT is deprecated and will be removed in 1.82.0, use TS_ENABLE_HEALTH_CHECK and optionally TS_LOCAL_ADDR_PORT")
}
if s.EgressSvcsCfgPath != "" && !(s.InKubernetes && s.KubeSecret != "") {
return errors.New("TS_EGRESS_SERVICES_CONFIG_PATH is only supported for Tailscale running on Kubernetes")
}
return nil
}
// setupKube is responsible for doing any necessary configuration and checks to
// ensure that tailscale state storage and the authentication mechanism will work on
// Kubernetes.
func (cfg *settings) setupKube(ctx context.Context) error {
func (cfg *settings) setupKube(ctx context.Context, kc *kubeClient) error {
if cfg.KubeSecret == "" {
return nil
}
canPatch, canCreate, err := kc.CheckSecretPermissions(ctx, cfg.KubeSecret)
if err != nil {
return fmt.Errorf("Some Kubernetes permissions are missing, please check your RBAC configuration: %v", err)
return fmt.Errorf("some Kubernetes permissions are missing, please check your RBAC configuration: %v", err)
}
cfg.KubernetesCanPatch = canPatch
kc.canPatch = canPatch
s, err := kc.GetSecret(ctx, cfg.KubeSecret)
if err != nil && kubeclient.IsNotFoundErr(err) && !canCreate {
return fmt.Errorf("Tailscale state Secret %s does not exist and we don't have permissions to create it. "+
"If you intend to store tailscale state elsewhere than a Kubernetes Secret, "+
"you can explicitly set TS_KUBE_SECRET env var to an empty string. "+
"Else ensure that RBAC is set up that allows the service account associated with this installation to create Secrets.", cfg.KubeSecret)
} else if err != nil && !kubeclient.IsNotFoundErr(err) {
return fmt.Errorf("Getting Tailscale state Secret %s: %v", cfg.KubeSecret, err)
}
if cfg.AuthKey == "" && !isOneStepConfig(cfg) {
if s == nil {
log.Print("TS_AUTHKEY not provided and kube secret does not exist, login will be interactive if needed.")
return nil
if err != nil {
if !kubeclient.IsNotFoundErr(err) {
return fmt.Errorf("getting Tailscale state Secret %s: %v", cfg.KubeSecret, err)
}
keyBytes, _ := s.Data["authkey"]
key := string(keyBytes)
if key != "" {
// This behavior of pulling authkeys from kube secrets was added
// at the same time as the patch permission, so we can enforce
// that we must be able to patch out the authkey after
// authenticating if you want to use this feature. This avoids
// us having to deal with the case where we might leave behind
// an unnecessary reusable authkey in a secret, like a rake in
// the grass.
if !cfg.KubernetesCanPatch {
return errors.New("authkey found in TS_KUBE_SECRET, but the pod doesn't have patch permissions on the secret to manage the authkey.")
}
cfg.AuthKey = key
} else {
log.Print("No authkey found in kube secret and TS_AUTHKEY not provided, login will be interactive if needed.")
if !canCreate {
return fmt.Errorf("tailscale state Secret %s does not exist and we don't have permissions to create it. "+
"If you intend to store tailscale state elsewhere than a Kubernetes Secret, "+
"you can explicitly set TS_KUBE_SECRET env var to an empty string. "+
"Else ensure that RBAC is set up that allows the service account associated with this installation to create Secrets.", cfg.KubeSecret)
}
}
// Return early if we already have an auth key.
if cfg.AuthKey != "" || isOneStepConfig(cfg) {
return nil
}
if s == nil {
log.Print("TS_AUTHKEY not provided and state Secret does not exist, login will be interactive if needed.")
return nil
}
keyBytes, _ := s.Data["authkey"]
key := string(keyBytes)
if key != "" {
// Enforce that we must be able to patch out the authkey after
// authenticating if you want to use this feature. This avoids
// us having to deal with the case where we might leave behind
// an unnecessary reusable authkey in a secret, like a rake in
// the grass.
if !cfg.KubernetesCanPatch {
return errors.New("authkey found in TS_KUBE_SECRET, but the pod doesn't have patch permissions on the Secret to manage the authkey.")
}
cfg.AuthKey = key
}
log.Print("No authkey found in state Secret and TS_AUTHKEY not provided, login will be interactive if needed.")
return nil
}
@@ -198,7 +291,7 @@ func isOneStepConfig(cfg *settings) bool {
// as an L3 proxy, proxying to an endpoint provided via one of the config env
// vars.
func isL3Proxy(cfg *settings) bool {
return cfg.ProxyTargetIP != "" || cfg.ProxyTargetDNSName != "" || cfg.TailnetTargetIP != "" || cfg.TailnetTargetFQDN != "" || cfg.AllowProxyingClusterTrafficViaIngress
return cfg.ProxyTargetIP != "" || cfg.ProxyTargetDNSName != "" || cfg.TailnetTargetIP != "" || cfg.TailnetTargetFQDN != "" || cfg.AllowProxyingClusterTrafficViaIngress || cfg.EgressSvcsCfgPath != ""
}
// hasKubeStateStore returns true if the state must be stored in a Kubernetes
@@ -207,6 +300,14 @@ func hasKubeStateStore(cfg *settings) bool {
return cfg.InKubernetes && cfg.KubernetesCanPatch && cfg.KubeSecret != ""
}
func (cfg *settings) localMetricsEnabled() bool {
return cfg.LocalAddrPort != "" && cfg.MetricsEnabled
}
func (cfg *settings) localHealthEnabled() bool {
return cfg.LocalAddrPort != "" && cfg.HealthCheckEnabled
}
// defaultEnv returns the value of the given envvar name, or defVal if
// unset.
func defaultEnv(name, defVal string) string {

View File

@@ -90,6 +90,12 @@ func tailscaledArgs(cfg *settings) []string {
if cfg.TailscaledConfigFilePath != "" {
args = append(args, "--config="+cfg.TailscaledConfigFilePath)
}
// Once enough proxy versions have been released for all the supported
// versions to understand this cfg setting, the operator can stop
// setting TS_TAILSCALED_EXTRA_ARGS for the debug flag.
if cfg.DebugAddrPort != "" && !strings.Contains(cfg.DaemonExtraArgs, cfg.DebugAddrPort) {
args = append(args, "--debug="+cfg.DebugAddrPort)
}
if cfg.DaemonExtraArgs != "" {
args = append(args, strings.Fields(cfg.DaemonExtraArgs)...)
}

View File

@@ -20,10 +20,10 @@ import (
)
func BenchmarkHandleBootstrapDNS(b *testing.B) {
tstest.Replace(b, bootstrapDNS, "log.tailscale.io,login.tailscale.com,controlplane.tailscale.com,login.us.tailscale.com")
tstest.Replace(b, bootstrapDNS, "log.tailscale.com,login.tailscale.com,controlplane.tailscale.com,login.us.tailscale.com")
refreshBootstrapDNS()
w := new(bitbucketResponseWriter)
req, _ := http.NewRequest("GET", "https://localhost/bootstrap-dns?q="+url.QueryEscape("log.tailscale.io"), nil)
req, _ := http.NewRequest("GET", "https://localhost/bootstrap-dns?q="+url.QueryEscape("log.tailscale.com"), nil)
b.ReportAllocs()
b.ResetTimer()
b.RunParallel(func(b *testing.PB) {
@@ -63,7 +63,7 @@ func TestUnpublishedDNS(t *testing.T) {
nettest.SkipIfNoNetwork(t)
const published = "login.tailscale.com"
const unpublished = "log.tailscale.io"
const unpublished = "log.tailscale.com"
prev1, prev2 := *bootstrapDNS, *unpublishedDNS
*bootstrapDNS = published
@@ -119,18 +119,18 @@ func TestUnpublishedDNSEmptyList(t *testing.T) {
unpublishedDNSCache.Store(&dnsEntryMap{
IPs: map[string][]net.IP{
"log.tailscale.io": {},
"log.tailscale.com": {},
"controlplane.tailscale.com": {net.IPv4(1, 2, 3, 4)},
},
Percent: map[string]float64{
"log.tailscale.io": 1.0,
"log.tailscale.com": 1.0,
"controlplane.tailscale.com": 1.0,
},
})
t.Run("CacheMiss", func(t *testing.T) {
// One domain in map but empty, one not in map at all
for _, q := range []string{"log.tailscale.io", "login.tailscale.com"} {
for _, q := range []string{"log.tailscale.com", "login.tailscale.com"} {
resetMetrics()
ips := getBootstrapDNS(t, q)

View File

@@ -8,6 +8,7 @@ import (
"crypto/x509"
"errors"
"fmt"
"net"
"net/http"
"path/filepath"
"regexp"
@@ -53,8 +54,9 @@ func certProviderByCertMode(mode, dir, hostname string) (certProvider, error) {
}
type manualCertManager struct {
cert *tls.Certificate
hostname string
cert *tls.Certificate
hostname string // hostname or IP address of server
noHostname bool // whether hostname is an IP address
}
// NewManualCertManager returns a cert provider which reads the certificate for the given hostname on creation.
@@ -74,7 +76,11 @@ func NewManualCertManager(certdir, hostname string) (certProvider, error) {
if err := x509Cert.VerifyHostname(hostname); err != nil {
return nil, fmt.Errorf("cert invalid for hostname %q: %w", hostname, err)
}
return &manualCertManager{cert: &cert, hostname: hostname}, nil
return &manualCertManager{
cert: &cert,
hostname: hostname,
noHostname: net.ParseIP(hostname) != nil,
}, nil
}
func (m *manualCertManager) TLSConfig() *tls.Config {
@@ -88,7 +94,7 @@ func (m *manualCertManager) TLSConfig() *tls.Config {
}
func (m *manualCertManager) getCertificate(hi *tls.ClientHelloInfo) (*tls.Certificate, error) {
if hi.ServerName != m.hostname {
if hi.ServerName != m.hostname && !m.noHostname {
return nil, fmt.Errorf("cert mismatch with hostname: %q", hi.ServerName)
}

cmd/derper/cert_test.go (new file, 97 lines)
View File

@@ -0,0 +1,97 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
package main
import (
"crypto/ecdsa"
"crypto/elliptic"
"crypto/rand"
"crypto/tls"
"crypto/x509"
"crypto/x509/pkix"
"encoding/pem"
"math/big"
"net"
"os"
"path/filepath"
"testing"
"time"
)
// Verify that in --certmode=manual mode, we can use a bare IP address
// as the --hostname and that GetCertificate will return it.
func TestCertIP(t *testing.T) {
dir := t.TempDir()
const hostname = "1.2.3.4"
priv, err := ecdsa.GenerateKey(elliptic.P224(), rand.Reader)
if err != nil {
t.Fatal(err)
}
serialNumberLimit := new(big.Int).Lsh(big.NewInt(1), 128)
serialNumber, err := rand.Int(rand.Reader, serialNumberLimit)
if err != nil {
t.Fatal(err)
}
ip := net.ParseIP(hostname)
if ip == nil {
t.Fatalf("invalid IP address %q", hostname)
}
template := &x509.Certificate{
SerialNumber: serialNumber,
Subject: pkix.Name{
Organization: []string{"Tailscale Test Corp"},
},
NotBefore: time.Now(),
NotAfter: time.Now().Add(30 * 24 * time.Hour),
KeyUsage: x509.KeyUsageDigitalSignature,
ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
BasicConstraintsValid: true,
IPAddresses: []net.IP{ip},
}
derBytes, err := x509.CreateCertificate(rand.Reader, template, template, &priv.PublicKey, priv)
if err != nil {
t.Fatal(err)
}
certOut, err := os.Create(filepath.Join(dir, hostname+".crt"))
if err != nil {
t.Fatal(err)
}
if err := pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: derBytes}); err != nil {
t.Fatalf("Failed to write data to cert.pem: %v", err)
}
if err := certOut.Close(); err != nil {
t.Fatalf("Error closing cert.pem: %v", err)
}
keyOut, err := os.OpenFile(filepath.Join(dir, hostname+".key"), os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0600)
if err != nil {
t.Fatal(err)
}
privBytes, err := x509.MarshalPKCS8PrivateKey(priv)
if err != nil {
t.Fatalf("Unable to marshal private key: %v", err)
}
if err := pem.Encode(keyOut, &pem.Block{Type: "PRIVATE KEY", Bytes: privBytes}); err != nil {
t.Fatalf("Failed to write data to key.pem: %v", err)
}
if err := keyOut.Close(); err != nil {
t.Fatalf("Error closing key.pem: %v", err)
}
cp, err := certProviderByCertMode("manual", dir, hostname)
if err != nil {
t.Fatal(err)
}
back, err := cp.TLSConfig().GetCertificate(&tls.ClientHelloInfo{
ServerName: "", // no SNI
})
if err != nil {
t.Fatalf("GetCertificate: %v", err)
}
if back == nil {
t.Fatalf("GetCertificate returned nil")
}
}

View File

@@ -27,7 +27,6 @@ tailscale.com/cmd/derper dependencies: (generated by github.com/tailscale/depawa
L github.com/google/nftables/expr from github.com/google/nftables+
L github.com/google/nftables/internal/parseexprfunc from github.com/google/nftables+
L github.com/google/nftables/xt from github.com/google/nftables/expr+
github.com/google/uuid from tailscale.com/util/fastuuid
github.com/hdevalence/ed25519consensus from tailscale.com/tka
L github.com/josharian/native from github.com/mdlayher/netlink+
L 💣 github.com/jsimonetti/rtnetlink from tailscale.com/net/netmon
@@ -113,9 +112,10 @@ tailscale.com/cmd/derper dependencies: (generated by github.com/tailscale/depawa
tailscale.com/net/stunserver from tailscale.com/cmd/derper
L tailscale.com/net/tcpinfo from tailscale.com/derp
tailscale.com/net/tlsdial from tailscale.com/derp/derphttp
tailscale.com/net/tlsdial/blockblame from tailscale.com/net/tlsdial
tailscale.com/net/tsaddr from tailscale.com/ipn+
💣 tailscale.com/net/tshttpproxy from tailscale.com/derp/derphttp+
tailscale.com/net/wsconn from tailscale.com/cmd/derper+
tailscale.com/net/wsconn from tailscale.com/cmd/derper
tailscale.com/paths from tailscale.com/client/tailscale
💣 tailscale.com/safesocket from tailscale.com/client/tailscale
tailscale.com/syncs from tailscale.com/cmd/derper+
@@ -139,6 +139,7 @@ tailscale.com/cmd/derper dependencies: (generated by github.com/tailscale/depawa
tailscale.com/types/persist from tailscale.com/ipn
tailscale.com/types/preftype from tailscale.com/ipn
tailscale.com/types/ptr from tailscale.com/hostinfo+
tailscale.com/types/result from tailscale.com/util/lineiter
tailscale.com/types/structs from tailscale.com/ipn+
tailscale.com/types/tkatype from tailscale.com/client/tailscale+
tailscale.com/types/views from tailscale.com/ipn+
@@ -150,23 +151,29 @@ tailscale.com/cmd/derper dependencies: (generated by github.com/tailscale/depawa
💣 tailscale.com/util/deephash from tailscale.com/util/syspolicy/setting
L 💣 tailscale.com/util/dirwalk from tailscale.com/metrics
tailscale.com/util/dnsname from tailscale.com/hostinfo+
tailscale.com/util/fastuuid from tailscale.com/tsweb
💣 tailscale.com/util/hashx from tailscale.com/util/deephash
tailscale.com/util/httpm from tailscale.com/client/tailscale
tailscale.com/util/lineread from tailscale.com/hostinfo+
tailscale.com/util/lineiter from tailscale.com/hostinfo+
L tailscale.com/util/linuxfw from tailscale.com/net/netns
tailscale.com/util/mak from tailscale.com/health+
tailscale.com/util/multierr from tailscale.com/health+
tailscale.com/util/nocasemaps from tailscale.com/types/ipproto
tailscale.com/util/rands from tailscale.com/tsweb
tailscale.com/util/set from tailscale.com/derp+
tailscale.com/util/singleflight from tailscale.com/net/dnscache
tailscale.com/util/slicesx from tailscale.com/cmd/derper+
tailscale.com/util/syspolicy from tailscale.com/ipn
tailscale.com/util/syspolicy/internal from tailscale.com/util/syspolicy/setting
tailscale.com/util/syspolicy/setting from tailscale.com/util/syspolicy
tailscale.com/util/syspolicy/internal from tailscale.com/util/syspolicy/setting+
tailscale.com/util/syspolicy/internal/loggerx from tailscale.com/util/syspolicy/internal/metrics+
tailscale.com/util/syspolicy/internal/metrics from tailscale.com/util/syspolicy/source
tailscale.com/util/syspolicy/rsop from tailscale.com/util/syspolicy
tailscale.com/util/syspolicy/setting from tailscale.com/util/syspolicy+
tailscale.com/util/syspolicy/source from tailscale.com/util/syspolicy+
tailscale.com/util/testenv from tailscale.com/util/syspolicy+
tailscale.com/util/usermetric from tailscale.com/health
tailscale.com/util/vizerror from tailscale.com/tailcfg+
W 💣 tailscale.com/util/winutil from tailscale.com/hostinfo+
W 💣 tailscale.com/util/winutil/gp from tailscale.com/util/syspolicy/source
W 💣 tailscale.com/util/winutil/winenv from tailscale.com/hostinfo+
tailscale.com/version from tailscale.com/derp+
tailscale.com/version/distro from tailscale.com/envknob+
@@ -187,7 +194,7 @@ tailscale.com/cmd/derper dependencies: (generated by github.com/tailscale/depawa
golang.org/x/crypto/salsa20/salsa from golang.org/x/crypto/nacl/box+
golang.org/x/crypto/sha3 from crypto/internal/mlkem768+
W golang.org/x/exp/constraints from tailscale.com/util/winutil
golang.org/x/exp/maps from tailscale.com/util/syspolicy/setting
golang.org/x/exp/maps from tailscale.com/util/syspolicy/setting+
L golang.org/x/net/bpf from github.com/mdlayher/netlink+
golang.org/x/net/dns/dnsmessage from net+
golang.org/x/net/http/httpguts from net/http
@@ -236,7 +243,6 @@ tailscale.com/cmd/derper dependencies: (generated by github.com/tailscale/depawa
crypto/tls from golang.org/x/crypto/acme+
crypto/x509 from crypto/tls+
crypto/x509/pkix from crypto/x509+
database/sql/driver from github.com/google/uuid
embed from crypto/internal/nistec+
encoding from encoding/json+
encoding/asn1 from crypto/x509+
@@ -248,7 +254,7 @@ tailscale.com/cmd/derper dependencies: (generated by github.com/tailscale/depawa
encoding/pem from crypto/tls+
errors from bufio+
expvar from github.com/prometheus/client_golang/prometheus+
flag from tailscale.com/cmd/derper
flag from tailscale.com/cmd/derper+
fmt from compress/flate+
go/token from google.golang.org/protobuf/internal/strs
hash from crypto+
@@ -256,6 +262,7 @@ tailscale.com/cmd/derper dependencies: (generated by github.com/tailscale/depawa
hash/fnv from google.golang.org/protobuf/internal/detrand
hash/maphash from go4.org/mem
html from net/http/pprof+
html/template from tailscale.com/cmd/derper
io from bufio+
io/fs from crypto/x509+
io/ioutil from github.com/mitchellh/go-ps+
@@ -267,7 +274,7 @@ tailscale.com/cmd/derper dependencies: (generated by github.com/tailscale/depawa
math/big from crypto/dsa+
math/bits from compress/flate+
math/rand from github.com/mdlayher/netlink+
math/rand/v2 from tailscale.com/util/fastuuid+
math/rand/v2 from internal/concurrent+
mime from github.com/prometheus/common/expfmt+
mime/multipart from net/http
mime/quotedprintable from mime/multipart
@@ -282,7 +289,7 @@ tailscale.com/cmd/derper dependencies: (generated by github.com/tailscale/depawa
os from crypto/rand+
os/exec from github.com/coreos/go-iptables/iptables+
os/signal from tailscale.com/cmd/derper
W os/user from tailscale.com/util/winutil
W os/user from tailscale.com/util/winutil+
path from github.com/prometheus/client_golang/prometheus/internal+
path/filepath from crypto/x509+
reflect from crypto/x509+
@@ -300,6 +307,8 @@ tailscale.com/cmd/derper dependencies: (generated by github.com/tailscale/depawa
sync/atomic from context+
syscall from crypto/rand+
text/tabwriter from runtime/pprof
text/template from html/template
text/template/parse from html/template+
time from compress/gzip+
unicode from bytes+
unicode/utf16 from crypto/x509+

View File

@@ -19,6 +19,7 @@ import (
"expvar"
"flag"
"fmt"
"html/template"
"io"
"log"
"math"
@@ -57,7 +58,7 @@ var (
configPath = flag.String("c", "", "config file path")
certMode = flag.String("certmode", "letsencrypt", "mode for getting a cert. possible options: manual, letsencrypt")
certDir = flag.String("certdir", tsweb.DefaultCertDir("derper-certs"), "directory to store LetsEncrypt certs, if addr's port is :443")
hostname = flag.String("hostname", "derp.tailscale.com", "LetsEncrypt host name, if addr's port is :443")
hostname = flag.String("hostname", "derp.tailscale.com", "LetsEncrypt host name, if addr's port is :443. When --certmode=manual, this can be an IP address to avoid SNI checks")
runSTUN = flag.Bool("stun", true, "whether to run a STUN server. It will bind to the same IP (if any) as the --addr flag value.")
runDERP = flag.Bool("derp", true, "whether to run a DERP server. The only reason to set this false is if you're decommissioning a server but want to keep its bootstrap DNS functionality still running.")
@@ -212,25 +213,16 @@ func main() {
tsweb.AddBrowserHeaders(w)
w.Header().Set("Content-Type", "text/html; charset=utf-8")
w.WriteHeader(200)
io.WriteString(w, `<html><body>
<h1>DERP</h1>
<p>
This is a <a href="https://tailscale.com/">Tailscale</a> DERP server.
</p>
<p>
Documentation:
</p>
<ul>
<li><a href="https://tailscale.com/kb/1232/derp-servers">About DERP</a></li>
<li><a href="https://pkg.go.dev/tailscale.com/derp">Protocol & Go docs</a></li>
<li><a href="https://github.com/tailscale/tailscale/tree/main/cmd/derper#derp">How to run a DERP server</a></li>
</ul>
`)
if !*runDERP {
io.WriteString(w, `<p>Status: <b>disabled</b></p>`)
}
if tsweb.AllowDebugAccess(r) {
io.WriteString(w, "<p>Debug info at <a href='/debug/'>/debug/</a>.</p>\n")
err := homePageTemplate.Execute(w, templateData{
ShowAbuseInfo: validProdHostname.MatchString(*hostname),
Disabled: !*runDERP,
AllowDebug: tsweb.AllowDebugAccess(r),
})
if err != nil {
if r.Context().Err() == nil {
log.Printf("homePageTemplate.Execute: %v", err)
}
return
}
}))
mux.Handle("/robots.txt", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
@@ -468,3 +460,52 @@ func init() {
return 0
}))
}
type templateData struct {
ShowAbuseInfo bool
Disabled bool
AllowDebug bool
}
// homePageTemplate renders the home page using [templateData].
var homePageTemplate = template.Must(template.New("home").Parse(`<html><body>
<h1>DERP</h1>
<p>
This is a <a href="https://tailscale.com/">Tailscale</a> DERP server.
</p>
<p>
It provides STUN, interactive connectivity establishment, and relaying of end-to-end encrypted traffic
for Tailscale clients.
</p>
{{if .ShowAbuseInfo }}
<p>
If you suspect abuse, please contact <a href="mailto:security@tailscale.com">security@tailscale.com</a>.
</p>
{{end}}
<p>
Documentation:
</p>
<ul>
{{if .ShowAbuseInfo }}
<li><a href="https://tailscale.com/security-policies">Tailscale Security Policies</a></li>
<li><a href="https://tailscale.com/tailscale-aup">Tailscale Acceptable Use Policies</a></li>
{{end}}
<li><a href="https://tailscale.com/kb/1232/derp-servers">About DERP</a></li>
<li><a href="https://pkg.go.dev/tailscale.com/derp">Protocol & Go docs</a></li>
<li><a href="https://github.com/tailscale/tailscale/tree/main/cmd/derper#derp">How to run a DERP server</a></li>
</ul>
{{if .Disabled}}
<p>Status: <b>disabled</b></p>
{{end}}
{{if .AllowDebug}}
<p>Debug info at <a href='/debug/'>/debug/</a>.</p>
{{end}}
</body>
</html>
`))

View File

@@ -4,6 +4,7 @@
package main
import (
"bytes"
"context"
"net/http"
"net/http/httptest"
@@ -107,6 +108,33 @@ func TestDeps(t *testing.T) {
"gvisor.dev/gvisor/pkg/tcpip/header": "https://github.com/tailscale/tailscale/issues/9756",
"tailscale.com/net/packet": "not needed in derper",
"github.com/gaissmai/bart": "not needed in derper",
"database/sql/driver": "not needed in derper", // previously came in via github.com/google/uuid
},
}.Check(t)
}
func TestTemplate(t *testing.T) {
buf := &bytes.Buffer{}
err := homePageTemplate.Execute(buf, templateData{
ShowAbuseInfo: true,
Disabled: true,
AllowDebug: true,
})
if err != nil {
t.Fatal(err)
}
str := buf.String()
if !strings.Contains(str, "If you suspect abuse") {
t.Error("Output is missing abuse mailto")
}
if !strings.Contains(str, "Tailscale Security Policies") {
t.Error("Output is missing Tailscale Security Policies link")
}
if !strings.Contains(str, "Status:") {
t.Error("Output is missing disabled status")
}
if !strings.Contains(str, "Debug info") {
t.Error("Output is missing debug info")
}
}

View File

@@ -47,6 +47,7 @@ func startMeshWithHost(s *derp.Server, host string) error {
c.SetURLDialer(func(ctx context.Context, network, addr string) (net.Conn, error) {
host, port, err := net.SplitHostPort(addr)
if err != nil {
logf("failed to split %q: %v", addr, err)
return nil, err
}
var d net.Dialer
@@ -55,15 +56,18 @@ func startMeshWithHost(s *derp.Server, host string) error {
subCtx, cancel := context.WithTimeout(ctx, 2*time.Second)
defer cancel()
vpcHost := base + "-vpc.tailscale.com"
ips, _ := r.LookupIP(subCtx, "ip", vpcHost)
ips, err := r.LookupIP(subCtx, "ip", vpcHost)
if err != nil {
logf("failed to resolve %v: %v", vpcHost, err)
}
if len(ips) > 0 {
vpcAddr := net.JoinHostPort(ips[0].String(), port)
c, err := d.DialContext(subCtx, network, vpcAddr)
if err == nil {
log.Printf("connected to %v (%v) instead of %v", vpcHost, ips[0], base)
logf("connected to %v (%v) instead of %v", vpcHost, ips[0], base)
return c, nil
}
log.Printf("failed to connect to %v (%v): %v; trying non-VPC route", vpcHost, ips[0], err)
logf("failed to connect to %v (%v): %v; trying non-VPC route", vpcHost, ips[0], err)
}
}
return d.DialContext(ctx, network, addr)

View File

@@ -18,17 +18,21 @@ import (
)
var (
derpMapURL = flag.String("derp-map", "https://login.tailscale.com/derpmap/default", "URL to DERP map (https:// or file://) or 'local' to use the local tailscaled's DERP map")
versionFlag = flag.Bool("version", false, "print version and exit")
listen = flag.String("listen", ":8030", "HTTP listen address")
probeOnce = flag.Bool("once", false, "probe once and print results, then exit; ignores the listen flag")
spread = flag.Bool("spread", true, "whether to spread probing over time")
interval = flag.Duration("interval", 15*time.Second, "probe interval")
meshInterval = flag.Duration("mesh-interval", 15*time.Second, "mesh probe interval")
stunInterval = flag.Duration("stun-interval", 15*time.Second, "STUN probe interval")
tlsInterval = flag.Duration("tls-interval", 15*time.Second, "TLS probe interval")
bwInterval = flag.Duration("bw-interval", 0, "bandwidth probe interval (0 = no bandwidth probing)")
bwSize = flag.Int64("bw-probe-size-bytes", 1_000_000, "bandwidth probe size")
derpMapURL = flag.String("derp-map", "https://login.tailscale.com/derpmap/default", "URL to DERP map (https:// or file://) or 'local' to use the local tailscaled's DERP map")
versionFlag = flag.Bool("version", false, "print version and exit")
listen = flag.String("listen", ":8030", "HTTP listen address")
probeOnce = flag.Bool("once", false, "probe once and print results, then exit; ignores the listen flag")
spread = flag.Bool("spread", true, "whether to spread probing over time")
interval = flag.Duration("interval", 15*time.Second, "probe interval")
meshInterval = flag.Duration("mesh-interval", 15*time.Second, "mesh probe interval")
stunInterval = flag.Duration("stun-interval", 15*time.Second, "STUN probe interval")
tlsInterval = flag.Duration("tls-interval", 15*time.Second, "TLS probe interval")
bwInterval = flag.Duration("bw-interval", 0, "bandwidth probe interval (0 = no bandwidth probing)")
bwSize = flag.Int64("bw-probe-size-bytes", 1_000_000, "bandwidth probe size")
bwTUNIPv4Address = flag.String("bw-tun-ipv4-addr", "", "if specified, bandwidth probes will be performed over a TUN device at this address in order to exercise TCP-in-TCP in similar fashion to TCP over Tailscale via DERP; we will use a /30 subnet including this IP address")
qdPacketsPerSecond = flag.Int("qd-packets-per-second", 0, "if greater than 0, queuing delay will be measured continuously using 260 byte packets (approximate size of a CallMeMaybe packet) sent at this rate per second")
qdPacketTimeout = flag.Duration("qd-packet-timeout", 5*time.Second, "queuing delay packets arriving after this period of time from being sent are treated like dropped packets and don't count toward queuing delay timings")
regionCode = flag.String("region-code", "", "probe only this region (e.g. 'lax'); if left blank, all regions will be probed")
)
func main() {
@@ -43,9 +47,13 @@ func main() {
prober.WithMeshProbing(*meshInterval),
prober.WithSTUNProbing(*stunInterval),
prober.WithTLSProbing(*tlsInterval),
prober.WithQueuingDelayProbing(*qdPacketsPerSecond, *qdPacketTimeout),
}
if *bwInterval > 0 {
opts = append(opts, prober.WithBandwidthProbing(*bwInterval, *bwSize))
opts = append(opts, prober.WithBandwidthProbing(*bwInterval, *bwSize, *bwTUNIPv4Address))
}
if *regionCode != "" {
opts = append(opts, prober.WithRegion(*regionCode))
}
dp, err := prober.DERP(p, *derpMapURL, opts...)
if err != nil {
@@ -75,6 +83,11 @@ func main() {
prober.WithPageLink("Prober metrics", "/debug/varz"),
prober.WithProbeLink("Run Probe", "/debug/probe-run?name={{.Name}}"),
), tsweb.HandlerOptions{Logf: log.Printf}))
mux.Handle("/healthz", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "text/plain")
w.WriteHeader(http.StatusOK)
w.Write([]byte("ok\n"))
}))
log.Printf("Listening on %s", *listen)
log.Fatal(http.ListenAndServe(*listen, mux))
}
@@ -97,7 +110,7 @@ func getOverallStatus(p *prober.Prober) (o overallStatus) {
// Do not show probes that have not finished yet.
continue
}
if i.Result {
if i.Status == prober.ProbeStatusSucceeded {
o.addGoodf("%s: %s", p, i.Latency)
} else {
o.addBadf("%s: %s", p, i.Error)

View File

@@ -46,11 +46,11 @@ func main() {
ClientID: clientID,
ClientSecret: clientSecret,
TokenURL: baseURL + "/api/v2/oauth/token",
Scopes: []string{"device"},
}
ctx := context.Background()
tsClient := tailscale.NewClient("-", nil)
tsClient.UserAgent = "tailscale-get-authkey"
tsClient.HTTPClient = credentials.Client(ctx)
tsClient.BaseURL = baseURL

View File

@@ -58,8 +58,8 @@ func apply(cache *Cache, client *http.Client, tailnet, apiKey string) func(conte
}
if cache.PrevETag == "" {
log.Println("no previous etag found, assuming local file is correct and recording that")
cache.PrevETag = localEtag
log.Println("no previous etag found, assuming the latest control etag")
cache.PrevETag = controlEtag
}
log.Printf("control: %s", controlEtag)
@@ -105,8 +105,8 @@ func test(cache *Cache, client *http.Client, tailnet, apiKey string) func(contex
}
if cache.PrevETag == "" {
log.Println("no previous etag found, assuming local file is correct and recording that")
cache.PrevETag = localEtag
log.Println("no previous etag found, assuming the latest control etag")
cache.PrevETag = controlEtag
}
log.Printf("control: %s", controlEtag)
@@ -148,8 +148,8 @@ func getChecksums(cache *Cache, client *http.Client, tailnet, apiKey string) fun
}
if cache.PrevETag == "" {
log.Println("no previous etag found, assuming local file is correct and recording that")
cache.PrevETag = Shuck(localEtag)
log.Println("no previous etag found, assuming control etag")
cache.PrevETag = Shuck(controlEtag)
}
log.Printf("control: %s", controlEtag)

View File

@@ -10,10 +10,12 @@ import (
"fmt"
"net/netip"
"slices"
"strings"
"sync"
"time"
"github.com/pkg/errors"
"errors"
"go.uber.org/zap"
xslices "golang.org/x/exp/slices"
corev1 "k8s.io/api/core/v1"
@@ -34,6 +36,7 @@ import (
const (
reasonConnectorCreationFailed = "ConnectorCreationFailed"
reasonConnectorCreating = "ConnectorCreating"
reasonConnectorCreated = "ConnectorCreated"
reasonConnectorInvalid = "ConnectorInvalid"
@@ -58,6 +61,7 @@ type ConnectorReconciler struct {
subnetRouters set.Slice[types.UID] // for subnet routers gauge
exitNodes set.Slice[types.UID] // for exit nodes gauge
appConnectors set.Slice[types.UID] // for app connectors gauge
}
var (
@@ -67,6 +71,8 @@ var (
gaugeConnectorSubnetRouterResources = clientmetric.NewGauge(kubetypes.MetricConnectorWithSubnetRouterCount)
// gaugeConnectorExitNodeResources tracks the number of Connectors currently managed by this operator instance that are exit nodes.
gaugeConnectorExitNodeResources = clientmetric.NewGauge(kubetypes.MetricConnectorWithExitNodeCount)
// gaugeConnectorAppConnectorResources tracks the number of Connectors currently managed by this operator instance that are app connectors.
gaugeConnectorAppConnectorResources = clientmetric.NewGauge(kubetypes.MetricConnectorWithAppConnectorCount)
)
func (a *ConnectorReconciler) Reconcile(ctx context.Context, req reconcile.Request) (res reconcile.Result, err error) {
@@ -108,13 +114,12 @@ func (a *ConnectorReconciler) Reconcile(ctx context.Context, req reconcile.Reque
oldCnStatus := cn.Status.DeepCopy()
setStatus := func(cn *tsapi.Connector, _ tsapi.ConditionType, status metav1.ConditionStatus, reason, message string) (reconcile.Result, error) {
tsoperator.SetConnectorCondition(cn, tsapi.ConnectorReady, status, reason, message, cn.Generation, a.clock, logger)
if !apiequality.Semantic.DeepEqual(oldCnStatus, cn.Status) {
var updateErr error
if !apiequality.Semantic.DeepEqual(oldCnStatus, &cn.Status) {
// An error encountered here should get returned by the Reconcile function.
if updateErr := a.Client.Status().Update(ctx, cn); updateErr != nil {
err = errors.Wrap(err, updateErr.Error())
}
updateErr = a.Client.Status().Update(ctx, cn)
}
return res, err
return res, errors.Join(err, updateErr)
}
if !slices.Contains(cn.Finalizers, FinalizerName) {
@@ -131,17 +136,24 @@ func (a *ConnectorReconciler) Reconcile(ctx context.Context, req reconcile.Reque
}
if err := a.validate(cn); err != nil {
logger.Errorf("error validating Connector spec: %w", err)
message := fmt.Sprintf(messageConnectorInvalid, err)
a.recorder.Eventf(cn, corev1.EventTypeWarning, reasonConnectorInvalid, message)
return setStatus(cn, tsapi.ConnectorReady, metav1.ConditionFalse, reasonConnectorInvalid, message)
}
if err = a.maybeProvisionConnector(ctx, logger, cn); err != nil {
logger.Errorf("error creating Connector resources: %w", err)
reason := reasonConnectorCreationFailed
message := fmt.Sprintf(messageConnectorCreationFailed, err)
a.recorder.Eventf(cn, corev1.EventTypeWarning, reasonConnectorCreationFailed, message)
return setStatus(cn, tsapi.ConnectorReady, metav1.ConditionFalse, reasonConnectorCreationFailed, message)
if strings.Contains(err.Error(), optimisticLockErrorMsg) {
reason = reasonConnectorCreating
message = fmt.Sprintf("optimistic lock error, retrying: %s", err)
err = nil
logger.Info(message)
} else {
a.recorder.Eventf(cn, corev1.EventTypeWarning, reason, message)
}
return setStatus(cn, tsapi.ConnectorReady, metav1.ConditionFalse, reason, message)
}
logger.Info("Connector resources synced")
@@ -150,6 +162,9 @@ func (a *ConnectorReconciler) Reconcile(ctx context.Context, req reconcile.Reque
cn.Status.SubnetRoutes = cn.Spec.SubnetRouter.AdvertiseRoutes.Stringify()
return setStatus(cn, tsapi.ConnectorReady, metav1.ConditionTrue, reasonConnectorCreated, reasonConnectorCreated)
}
if cn.Spec.AppConnector != nil {
cn.Status.IsAppConnector = true
}
cn.Status.SubnetRoutes = ""
return setStatus(cn, tsapi.ConnectorReady, metav1.ConditionTrue, reasonConnectorCreated, reasonConnectorCreated)
}
@@ -183,29 +198,44 @@ func (a *ConnectorReconciler) maybeProvisionConnector(ctx context.Context, logge
isExitNode: cn.Spec.ExitNode,
},
ProxyClassName: proxyClass,
proxyType: proxyTypeConnector,
}
if cn.Spec.SubnetRouter != nil && len(cn.Spec.SubnetRouter.AdvertiseRoutes) > 0 {
sts.Connector.routes = cn.Spec.SubnetRouter.AdvertiseRoutes.Stringify()
}
if cn.Spec.AppConnector != nil {
sts.Connector.isAppConnector = true
if len(cn.Spec.AppConnector.Routes) != 0 {
sts.Connector.routes = cn.Spec.AppConnector.Routes.Stringify()
}
}
a.mu.Lock()
if sts.Connector.isExitNode {
if cn.Spec.ExitNode {
a.exitNodes.Add(cn.UID)
} else {
a.exitNodes.Remove(cn.UID)
}
if sts.Connector.routes != "" {
if cn.Spec.SubnetRouter != nil {
a.subnetRouters.Add(cn.GetUID())
} else {
a.subnetRouters.Remove(cn.GetUID())
}
if cn.Spec.AppConnector != nil {
a.appConnectors.Add(cn.GetUID())
} else {
a.appConnectors.Remove(cn.GetUID())
}
a.mu.Unlock()
gaugeConnectorSubnetRouterResources.Set(int64(a.subnetRouters.Len()))
gaugeConnectorExitNodeResources.Set(int64(a.exitNodes.Len()))
gaugeConnectorAppConnectorResources.Set(int64(a.appConnectors.Len()))
var connectors set.Slice[types.UID]
connectors.AddSlice(a.exitNodes.Slice())
connectors.AddSlice(a.subnetRouters.Slice())
connectors.AddSlice(a.appConnectors.Slice())
gaugeConnectorResources.Set(int64(connectors.Len()))
_, err := a.ssr.Provision(ctx, logger, sts)
@@ -213,27 +243,27 @@ func (a *ConnectorReconciler) maybeProvisionConnector(ctx context.Context, logge
return err
}
_, tsHost, ips, err := a.ssr.DeviceInfo(ctx, crl)
dev, err := a.ssr.DeviceInfo(ctx, crl, logger)
if err != nil {
return err
}
if tsHost == "" {
logger.Debugf("no Tailscale hostname known yet, waiting for connector pod to finish auth")
if dev == nil || dev.hostname == "" {
logger.Debugf("no Tailscale hostname known yet, waiting for Connector Pod to finish auth")
// No hostname yet. Wait for the connector pod to auth.
cn.Status.TailnetIPs = nil
cn.Status.Hostname = ""
return nil
}
cn.Status.TailnetIPs = ips
cn.Status.Hostname = tsHost
cn.Status.TailnetIPs = dev.ips
cn.Status.Hostname = dev.hostname
return nil
}
func (a *ConnectorReconciler) maybeCleanupConnector(ctx context.Context, logger *zap.SugaredLogger, cn *tsapi.Connector) (bool, error) {
if done, err := a.ssr.Cleanup(ctx, logger, childResourceLabels(cn.Name, a.tsnamespace, "connector")); err != nil {
if done, err := a.ssr.Cleanup(ctx, logger, childResourceLabels(cn.Name, a.tsnamespace, "connector"), proxyTypeConnector); err != nil {
return false, fmt.Errorf("failed to cleanup Connector resources: %w", err)
} else if !done {
logger.Debugf("Connector cleanup not done yet, waiting for next reconcile")
@@ -248,12 +278,15 @@ func (a *ConnectorReconciler) maybeCleanupConnector(ctx context.Context, logger
a.mu.Lock()
a.subnetRouters.Remove(cn.UID)
a.exitNodes.Remove(cn.UID)
a.appConnectors.Remove(cn.UID)
a.mu.Unlock()
gaugeConnectorExitNodeResources.Set(int64(a.exitNodes.Len()))
gaugeConnectorSubnetRouterResources.Set(int64(a.subnetRouters.Len()))
gaugeConnectorAppConnectorResources.Set(int64(a.appConnectors.Len()))
var connectors set.Slice[types.UID]
connectors.AddSlice(a.exitNodes.Slice())
connectors.AddSlice(a.subnetRouters.Slice())
connectors.AddSlice(a.appConnectors.Slice())
gaugeConnectorResources.Set(int64(connectors.Len()))
return true, nil
}
@@ -262,8 +295,14 @@ func (a *ConnectorReconciler) validate(cn *tsapi.Connector) error {
// Connector fields are already validated at apply time with CEL validation
// on custom resource fields. The checks here are a backup in case the
// CEL validation breaks without us noticing.
if !(cn.Spec.SubnetRouter != nil || cn.Spec.ExitNode) {
return errors.New("invalid spec: a Connector must expose subnet routes or act as an exit node (or both)")
if cn.Spec.SubnetRouter == nil && !cn.Spec.ExitNode && cn.Spec.AppConnector == nil {
return errors.New("invalid spec: a Connector must be configured as at least one of subnet router, exit node or app connector")
}
if (cn.Spec.SubnetRouter != nil || cn.Spec.ExitNode) && cn.Spec.AppConnector != nil {
return errors.New("invalid spec: a Connector that is configured as an app connector must not be also configured as a subnet router or exit node")
}
if cn.Spec.AppConnector != nil {
return validateAppConnector(cn.Spec.AppConnector)
}
if cn.Spec.SubnetRouter == nil {
return nil
@@ -272,19 +311,27 @@ func (a *ConnectorReconciler) validate(cn *tsapi.Connector) error {
}
func validateSubnetRouter(sb *tsapi.SubnetRouter) error {
if len(sb.AdvertiseRoutes) < 1 {
if len(sb.AdvertiseRoutes) == 0 {
return errors.New("invalid subnet router spec: no routes defined")
}
var err error
for _, route := range sb.AdvertiseRoutes {
return validateRoutes(sb.AdvertiseRoutes)
}
func validateAppConnector(ac *tsapi.AppConnector) error {
return validateRoutes(ac.Routes)
}
func validateRoutes(routes tsapi.Routes) error {
var errs []error
for _, route := range routes {
pfx, e := netip.ParsePrefix(string(route))
if e != nil {
err = errors.Wrap(err, fmt.Sprintf("route %s is invalid: %v", route, err))
errs = append(errs, fmt.Errorf("route %v is invalid: %v", route, e))
continue
}
if pfx.Masked() != pfx {
err = errors.Wrap(err, fmt.Sprintf("route %s has non-address bits set; expected %s", pfx, pfx.Masked()))
errs = append(errs, fmt.Errorf("route %s has non-address bits set; expected %s", pfx, pfx.Masked()))
}
}
return err
return errors.Join(errs...)
}
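Since the check above relies on Masked(), any prefix with host bits set is rejected, which is the behaviour the Connector tests below exercise with 1.2.3.4/5. A standalone sketch:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	pfx := netip.MustParsePrefix("1.2.3.4/5")
	// Masked() zeroes the non-address bits, so the two values differ and
	// validateRoutes would report the route as invalid.
	fmt.Println(pfx.Masked())        // 0.0.0.0/5
	fmt.Println(pfx == pfx.Masked()) // false
}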

View File

@@ -8,12 +8,14 @@ package main
import (
"context"
"testing"
"time"
"go.uber.org/zap"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/tools/record"
"sigs.k8s.io/controller-runtime/pkg/client/fake"
tsapi "tailscale.com/k8s-operator/apis/v1alpha1"
"tailscale.com/kube/kubetypes"
@@ -201,7 +203,7 @@ func TestConnectorWithProxyClass(t *testing.T) {
pc := &tsapi.ProxyClass{
ObjectMeta: metav1.ObjectMeta{Name: "custom-metadata"},
Spec: tsapi.ProxyClassSpec{StatefulSet: &tsapi.StatefulSet{
Labels: map[string]string{"foo": "bar"},
Labels: tsapi.Labels{"foo": "bar"},
Annotations: map[string]string{"bar.io/foo": "some-val"},
Pod: &tsapi.Pod{Annotations: map[string]string{"foo.io/bar": "some-val"}}}},
}
@@ -278,7 +280,7 @@ func TestConnectorWithProxyClass(t *testing.T) {
pc.Status = tsapi.ProxyClassStatus{
Conditions: []metav1.Condition{{
Status: metav1.ConditionTrue,
Type: string(tsapi.ProxyClassready),
Type: string(tsapi.ProxyClassReady),
ObservedGeneration: pc.Generation,
}}}
})
@@ -296,3 +298,100 @@ func TestConnectorWithProxyClass(t *testing.T) {
expectReconciled(t, cr, "", "test")
expectEqual(t, fc, expectedSTS(t, fc, opts), removeHashAnnotation)
}
func TestConnectorWithAppConnector(t *testing.T) {
// Setup
cn := &tsapi.Connector{
ObjectMeta: metav1.ObjectMeta{
Name: "test",
UID: types.UID("1234-UID"),
},
TypeMeta: metav1.TypeMeta{
Kind: tsapi.ConnectorKind,
APIVersion: "tailscale.io/v1alpha1",
},
Spec: tsapi.ConnectorSpec{
AppConnector: &tsapi.AppConnector{},
},
}
fc := fake.NewClientBuilder().
WithScheme(tsapi.GlobalScheme).
WithObjects(cn).
WithStatusSubresource(cn).
Build()
ft := &fakeTSClient{}
zl, err := zap.NewDevelopment()
if err != nil {
t.Fatal(err)
}
cl := tstest.NewClock(tstest.ClockOpts{})
fr := record.NewFakeRecorder(1)
cr := &ConnectorReconciler{
Client: fc,
clock: cl,
ssr: &tailscaleSTSReconciler{
Client: fc,
tsClient: ft,
defaultTags: []string{"tag:k8s"},
operatorNamespace: "operator-ns",
proxyImage: "tailscale/tailscale",
},
logger: zl.Sugar(),
recorder: fr,
}
// 1. Connector with app connector is created and becomes ready
expectReconciled(t, cr, "", "test")
fullName, shortName := findGenName(t, fc, "", "test", "connector")
opts := configOpts{
stsName: shortName,
secretName: fullName,
parentType: "connector",
hostname: "test-connector",
app: kubetypes.AppConnector,
isAppConnector: true,
}
expectEqual(t, fc, expectedSecret(t, fc, opts), nil)
expectEqual(t, fc, expectedSTS(t, fc, opts), removeHashAnnotation)
// Connector's ready condition should be set to true
cn.ObjectMeta.Finalizers = append(cn.ObjectMeta.Finalizers, "tailscale.com/finalizer")
cn.Status.IsAppConnector = true
cn.Status.Conditions = []metav1.Condition{{
Type: string(tsapi.ConnectorReady),
Status: metav1.ConditionTrue,
LastTransitionTime: metav1.Time{Time: cl.Now().Truncate(time.Second)},
Reason: reasonConnectorCreated,
Message: reasonConnectorCreated,
}}
expectEqual(t, fc, cn, nil)
// 2. Connector with invalid app connector routes has status set to invalid
mustUpdate[tsapi.Connector](t, fc, "", "test", func(conn *tsapi.Connector) {
conn.Spec.AppConnector.Routes = tsapi.Routes{tsapi.Route("1.2.3.4/5")}
})
cn.Spec.AppConnector.Routes = tsapi.Routes{tsapi.Route("1.2.3.4/5")}
expectReconciled(t, cr, "", "test")
cn.Status.Conditions = []metav1.Condition{{
Type: string(tsapi.ConnectorReady),
Status: metav1.ConditionFalse,
LastTransitionTime: metav1.Time{Time: cl.Now().Truncate(time.Second)},
Reason: reasonConnectorInvalid,
Message: "Connector is invalid: route 1.2.3.4/5 has non-address bits set; expected 0.0.0.0/5",
}}
expectEqual(t, fc, cn, nil)
// 3. Connector with valid app connector routes becomes ready
mustUpdate[tsapi.Connector](t, fc, "", "test", func(conn *tsapi.Connector) {
conn.Spec.AppConnector.Routes = tsapi.Routes{tsapi.Route("10.88.2.21/32")}
})
cn.Spec.AppConnector.Routes = tsapi.Routes{tsapi.Route("10.88.2.21/32")}
cn.Status.Conditions = []metav1.Condition{{
Type: string(tsapi.ConnectorReady),
Status: metav1.ConditionTrue,
LastTransitionTime: metav1.Time{Time: cl.Now().Truncate(time.Second)},
Reason: reasonConnectorCreated,
Message: reasonConnectorCreated,
}}
expectReconciled(t, cr, "", "test")
}

View File

@@ -80,10 +80,6 @@ tailscale.com/cmd/k8s-operator dependencies: (generated by github.com/tailscale/
github.com/beorn7/perks/quantile from github.com/prometheus/client_golang/prometheus
github.com/bits-and-blooms/bitset from github.com/gaissmai/bart
💣 github.com/cespare/xxhash/v2 from github.com/prometheus/client_golang/prometheus
github.com/coder/websocket from tailscale.com/control/controlhttp+
github.com/coder/websocket/internal/errd from github.com/coder/websocket
github.com/coder/websocket/internal/util from github.com/coder/websocket
github.com/coder/websocket/internal/xsync from github.com/coder/websocket
L github.com/coreos/go-iptables/iptables from tailscale.com/util/linuxfw
💣 github.com/davecgh/go-spew/spew from k8s.io/apimachinery/pkg/util/dump
W 💣 github.com/dblohm7/wingoes from github.com/dblohm7/wingoes/com+
@@ -229,7 +225,6 @@ tailscale.com/cmd/k8s-operator dependencies: (generated by github.com/tailscale/
github.com/tailscale/wireguard-go/rwcancel from github.com/tailscale/wireguard-go/device+
github.com/tailscale/wireguard-go/tai64n from github.com/tailscale/wireguard-go/device
💣 github.com/tailscale/wireguard-go/tun from github.com/tailscale/wireguard-go/device+
github.com/tcnksm/go-httpstat from tailscale.com/net/netcheck
L github.com/u-root/uio/rand from github.com/insomniacslk/dhcp/dhcpv4
L github.com/u-root/uio/uio from github.com/insomniacslk/dhcp/dhcpv4+
L github.com/vishvananda/netns from github.com/tailscale/netlink+
@@ -310,7 +305,7 @@ tailscale.com/cmd/k8s-operator dependencies: (generated by github.com/tailscale/
gvisor.dev/gvisor/pkg/tcpip/network/internal/ip from gvisor.dev/gvisor/pkg/tcpip/network/ipv4+
gvisor.dev/gvisor/pkg/tcpip/network/internal/multicast from gvisor.dev/gvisor/pkg/tcpip/network/ipv4+
gvisor.dev/gvisor/pkg/tcpip/network/ipv4 from tailscale.com/net/tstun+
gvisor.dev/gvisor/pkg/tcpip/network/ipv6 from tailscale.com/wgengine/netstack
gvisor.dev/gvisor/pkg/tcpip/network/ipv6 from tailscale.com/wgengine/netstack+
gvisor.dev/gvisor/pkg/tcpip/ports from gvisor.dev/gvisor/pkg/tcpip/stack+
gvisor.dev/gvisor/pkg/tcpip/seqnum from gvisor.dev/gvisor/pkg/tcpip/header+
💣 gvisor.dev/gvisor/pkg/tcpip/stack from gvisor.dev/gvisor/pkg/tcpip/adapters/gonet+
@@ -382,7 +377,7 @@ tailscale.com/cmd/k8s-operator dependencies: (generated by github.com/tailscale/
k8s.io/api/storage/v1beta1 from k8s.io/client-go/applyconfigurations/storage/v1beta1+
k8s.io/api/storagemigration/v1alpha1 from k8s.io/client-go/applyconfigurations/storagemigration/v1alpha1+
k8s.io/apiextensions-apiserver/pkg/apis/apiextensions from k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1
💣 k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1 from sigs.k8s.io/controller-runtime/pkg/webhook/conversion
💣 k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1 from sigs.k8s.io/controller-runtime/pkg/webhook/conversion+
k8s.io/apimachinery/pkg/api/equality from k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1+
k8s.io/apimachinery/pkg/api/errors from k8s.io/apimachinery/pkg/util/managedfields/internal+
k8s.io/apimachinery/pkg/api/meta from k8s.io/apimachinery/pkg/api/validation+
@@ -654,10 +649,11 @@ tailscale.com/cmd/k8s-operator dependencies: (generated by github.com/tailscale/
tailscale.com/client/tailscale/apitype from tailscale.com/client/tailscale+
tailscale.com/client/web from tailscale.com/ipn/ipnlocal
tailscale.com/clientupdate from tailscale.com/client/web+
tailscale.com/clientupdate/distsign from tailscale.com/clientupdate
LW tailscale.com/clientupdate/distsign from tailscale.com/clientupdate
tailscale.com/control/controlbase from tailscale.com/control/controlhttp+
tailscale.com/control/controlclient from tailscale.com/ipn/ipnlocal+
tailscale.com/control/controlhttp from tailscale.com/control/controlclient
tailscale.com/control/controlhttp/controlhttpcommon from tailscale.com/control/controlhttp
tailscale.com/control/controlknobs from tailscale.com/control/controlclient+
tailscale.com/derp from tailscale.com/derp/derphttp+
tailscale.com/derp/derphttp from tailscale.com/ipn/localapi+
@@ -668,6 +664,7 @@ tailscale.com/cmd/k8s-operator dependencies: (generated by github.com/tailscale/
tailscale.com/doctor/routetable from tailscale.com/ipn/ipnlocal
tailscale.com/drive from tailscale.com/client/tailscale+
tailscale.com/envknob from tailscale.com/client/tailscale+
tailscale.com/envknob/featureknob from tailscale.com/client/web+
tailscale.com/health from tailscale.com/control/controlclient+
tailscale.com/health/healthmsg from tailscale.com/ipn/ipnlocal
tailscale.com/hostinfo from tailscale.com/client/web+
@@ -690,6 +687,7 @@ tailscale.com/cmd/k8s-operator dependencies: (generated by github.com/tailscale/
tailscale.com/k8s-operator/sessionrecording/spdy from tailscale.com/k8s-operator/sessionrecording
tailscale.com/k8s-operator/sessionrecording/tsrecorder from tailscale.com/k8s-operator/sessionrecording+
tailscale.com/k8s-operator/sessionrecording/ws from tailscale.com/k8s-operator/sessionrecording
tailscale.com/kube/egressservices from tailscale.com/cmd/k8s-operator
tailscale.com/kube/kubeapi from tailscale.com/ipn/store/kubestore+
tailscale.com/kube/kubeclient from tailscale.com/ipn/store/kubestore
tailscale.com/kube/kubetypes from tailscale.com/cmd/k8s-operator+
@@ -733,11 +731,11 @@ tailscale.com/cmd/k8s-operator dependencies: (generated by github.com/tailscale/
tailscale.com/net/stun from tailscale.com/ipn/localapi+
L tailscale.com/net/tcpinfo from tailscale.com/derp
tailscale.com/net/tlsdial from tailscale.com/control/controlclient+
tailscale.com/net/tlsdial/blockblame from tailscale.com/net/tlsdial
tailscale.com/net/tsaddr from tailscale.com/client/web+
tailscale.com/net/tsdial from tailscale.com/control/controlclient+
💣 tailscale.com/net/tshttpproxy from tailscale.com/clientupdate/distsign+
tailscale.com/net/tstun from tailscale.com/tsd+
tailscale.com/net/wsconn from tailscale.com/control/controlhttp+
tailscale.com/omit from tailscale.com/ipn/conffile
tailscale.com/paths from tailscale.com/client/tailscale+
💣 tailscale.com/portlist from tailscale.com/ipn/ipnlocal
@@ -772,6 +770,7 @@ tailscale.com/cmd/k8s-operator dependencies: (generated by github.com/tailscale/
tailscale.com/types/persist from tailscale.com/control/controlclient+
tailscale.com/types/preftype from tailscale.com/ipn+
tailscale.com/types/ptr from tailscale.com/cmd/k8s-operator+
tailscale.com/types/result from tailscale.com/util/lineiter
tailscale.com/types/structs from tailscale.com/control/controlclient+
tailscale.com/types/tkatype from tailscale.com/client/tailscale+
tailscale.com/types/views from tailscale.com/appc+
@@ -789,7 +788,7 @@ tailscale.com/cmd/k8s-operator dependencies: (generated by github.com/tailscale/
💣 tailscale.com/util/hashx from tailscale.com/util/deephash
tailscale.com/util/httphdr from tailscale.com/ipn/ipnlocal+
tailscale.com/util/httpm from tailscale.com/client/tailscale+
tailscale.com/util/lineread from tailscale.com/hostinfo+
tailscale.com/util/lineiter from tailscale.com/hostinfo+
L tailscale.com/util/linuxfw from tailscale.com/net/netns+
tailscale.com/util/mak from tailscale.com/appc+
tailscale.com/util/multierr from tailscale.com/control/controlclient+
@@ -808,8 +807,12 @@ tailscale.com/cmd/k8s-operator dependencies: (generated by github.com/tailscale/
tailscale.com/util/singleflight from tailscale.com/control/controlclient+
tailscale.com/util/slicesx from tailscale.com/appc+
tailscale.com/util/syspolicy from tailscale.com/control/controlclient+
tailscale.com/util/syspolicy/internal from tailscale.com/util/syspolicy/setting
tailscale.com/util/syspolicy/setting from tailscale.com/util/syspolicy
tailscale.com/util/syspolicy/internal from tailscale.com/util/syspolicy/setting+
tailscale.com/util/syspolicy/internal/loggerx from tailscale.com/util/syspolicy/internal/metrics+
tailscale.com/util/syspolicy/internal/metrics from tailscale.com/util/syspolicy/source
tailscale.com/util/syspolicy/rsop from tailscale.com/util/syspolicy+
tailscale.com/util/syspolicy/setting from tailscale.com/util/syspolicy+
tailscale.com/util/syspolicy/source from tailscale.com/util/syspolicy+
tailscale.com/util/sysresources from tailscale.com/wgengine/magicsock
tailscale.com/util/systemd from tailscale.com/control/controlclient+
tailscale.com/util/testenv from tailscale.com/control/controlclient+
@@ -819,7 +822,7 @@ tailscale.com/cmd/k8s-operator dependencies: (generated by github.com/tailscale/
tailscale.com/util/vizerror from tailscale.com/tailcfg+
💣 tailscale.com/util/winutil from tailscale.com/clientupdate+
W 💣 tailscale.com/util/winutil/authenticode from tailscale.com/clientupdate+
W 💣 tailscale.com/util/winutil/gp from tailscale.com/net/dns
W 💣 tailscale.com/util/winutil/gp from tailscale.com/net/dns+
W tailscale.com/util/winutil/policy from tailscale.com/ipn/ipnlocal
W 💣 tailscale.com/util/winutil/winenv from tailscale.com/hostinfo+
tailscale.com/util/zstdframe from tailscale.com/control/controlclient+

View File

@@ -35,9 +35,13 @@ spec:
{{- toYaml . | nindent 8 }}
{{- end }}
volumes:
- name: oauth
secret:
secretName: operator-oauth
- name: oauth
{{- with .Values.oauthSecretVolume }}
{{- toYaml . | nindent 10 }}
{{- else }}
secret:
secretName: operator-oauth
{{- end }}
containers:
- name: operator
{{- with .Values.operatorConfig.securityContext }}
@@ -81,6 +85,14 @@ spec:
- name: PROXY_DEFAULT_CLASS
value: {{ .Values.proxyConfig.defaultProxyClass }}
{{- end }}
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_UID
valueFrom:
fieldRef:
fieldPath: metadata.uid
{{- with .Values.operatorConfig.extraEnv }}
{{- toYaml . | nindent 12 }}
{{- end }}

View File

@@ -1,3 +1,4 @@
{{- if .Values.ingressClass.enabled }}
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
@@ -6,3 +7,4 @@ metadata:
spec:
controller: tailscale.com/ts-ingress # controller name currently cannot be changed
# parameters: {} # currently no parameters are supported
{{- end }}

View File

@@ -6,6 +6,10 @@ kind: ServiceAccount
metadata:
name: operator
namespace: {{ .Release.Namespace }}
{{- with .Values.operatorConfig.serviceAccountAnnotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
@@ -22,7 +26,7 @@ rules:
resources: ["ingressclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: ["tailscale.com"]
resources: ["connectors", "connectors/status", "proxyclasses", "proxyclasses/status"]
resources: ["connectors", "connectors/status", "proxyclasses", "proxyclasses/status", "proxygroups", "proxygroups/status"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["tailscale.com"]
resources: ["dnsconfigs", "dnsconfigs/status"]
@@ -30,6 +34,10 @@ rules:
- apiGroups: ["tailscale.com"]
resources: ["recorders", "recorders/status"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["apiextensions.k8s.io"]
resources: ["customresourcedefinitions"]
verbs: ["get", "list", "watch"]
resourceNames: ["servicemonitors.monitoring.coreos.com"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
@@ -53,15 +61,21 @@ rules:
- apiGroups: [""]
resources: ["secrets", "serviceaccounts", "configmaps"]
verbs: ["create","delete","deletecollection","get","list","patch","update","watch"]
- apiGroups: [""]
resources: ["pods"]
verbs: ["get","list","watch"]
- apiGroups: ["apps"]
resources: ["statefulsets", "deployments"]
verbs: ["create","delete","deletecollection","get","list","patch","update","watch"]
- apiGroups: ["discovery.k8s.io"]
resources: ["endpointslices"]
verbs: ["get", "list", "watch"]
verbs: ["get", "list", "watch", "create", "update", "deletecollection"]
- apiGroups: ["rbac.authorization.k8s.io"]
resources: ["roles", "rolebindings"]
verbs: ["get", "create", "patch", "update", "list", "watch"]
- apiGroups: ["monitoring.coreos.com"]
resources: ["servicemonitors"]
verbs: ["get", "list", "update", "create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding

View File

@@ -16,6 +16,9 @@ rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["create","delete","deletecollection","get","list","patch","update","watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "patch", "get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding

View File

@@ -3,11 +3,26 @@
# Operator oauth credentials. If set a Kubernetes Secret with the provided
# values will be created in the operator namespace. If unset a Secret named
# operator-oauth must be precreated.
# operator-oauth must be precreated or oauthSecretVolume needs to be adjusted.
# This block will be overridden by oauthSecretVolume, if set.
oauth: {}
# clientId: ""
# clientSecret: ""
# Secret volume.
# If set, it defines the volume that the oauth secrets will be mounted from.
# The volume needs to contain two files named `client_id` and `client_secret`.
# If unset, the volume will reference the Secret named operator-oauth.
# This block will override the oauth block.
oauthSecretVolume: {}
# csi:
# driver: secrets-store.csi.k8s.io
# readOnly: true
# volumeAttributes:
# secretProviderClass: tailscale-oauth
#
## NAME is pre-defined!
# installCRDs determines whether tailscale.com CRDs should be installed as part
# of chart installation. We do not use Helm's CRD installation mechanism as that
# does not allow for upgrading CRDs.
@@ -40,6 +55,9 @@ operatorConfig:
podAnnotations: {}
podLabels: {}
serviceAccountAnnotations: {}
# eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/tailscale-operator-role
tolerations: []
affinity: {}
@@ -54,6 +72,9 @@ operatorConfig:
# - name: EXTRA_VAR2
# value: "value2"
# If you already have a tailscale ingressclass in your cluster (or vcluster), you can disable its creation here
ingressClass:
enabled: true
# proxyConfig contains configuration that will be applied to any ingress/egress
# proxies created by the operator.
@@ -79,7 +100,8 @@ proxyConfig:
defaultTags: "tag:k8s"
firewallMode: auto
# If defined, this proxy class will be used as the default proxy class for
# service and ingress resources that do not have a proxy class defined.
# service and ingress resources that do not have a proxy class defined. It
# does not apply to Connector resources.
defaultProxyClass: ""
# apiServerProxyConfig allows to configure whether the operator should expose
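Pulling the new chart knobs above together, a custom values file could look roughly like the sketch below; the SecretProviderClass name, IAM role ARN and ProxyClass name are placeholders taken from the inline comments, not required values.

# my-values.yaml (illustrative sketch only)
oauthSecretVolume:
  csi:
    driver: secrets-store.csi.k8s.io
    readOnly: true
    volumeAttributes:
      secretProviderClass: tailscale-oauth   # placeholder class name
operatorConfig:
  serviceAccountAnnotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/tailscale-operator-role
ingressClass:
  enabled: false   # skip creating the IngressClass if one already exists in the cluster
proxyConfig:
  defaultProxyClass: prod-proxies   # placeholder ProxyClass name; not applied to Connectors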

View File

@@ -24,6 +24,10 @@ spec:
jsonPath: .status.isExitNode
name: IsExitNode
type: string
- description: Whether this Connector instance is an app connector.
jsonPath: .status.isAppConnector
name: IsAppConnector
type: string
- description: Status of the deployed Connector resources.
jsonPath: .status.conditions[?(@.type == "ConnectorReady")].reason
name: Status
@@ -66,10 +70,40 @@ spec:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
type: object
properties:
appConnector:
description: |-
AppConnector defines whether the Connector device should act as a Tailscale app connector. A Connector that is
configured as an app connector cannot be a subnet router or an exit node. If this field is unset, the
Connector does not act as an app connector.
Note that you will need to manually configure the permissions and the domains for the app connector via the
Admin panel.
Note also that the main tested and supported use case of this config option is to deploy an app connector on
Kubernetes to access SaaS applications available on the public internet. Using the app connector to expose
cluster workloads or other internal workloads to tailnet might work, but this is not a use case that we have
tested or optimised for.
If you are using the app connector to access SaaS applications because you need a predictable egress IP that
can be whitelisted, it is also your responsibility to ensure that cluster traffic from the connector flows
via that predictable IP, for example by enforcing that cluster egress traffic is routed via an egress NAT
device with a static IP address.
https://tailscale.com/kb/1281/app-connectors
type: object
properties:
routes:
description: |-
Routes are optional preconfigured routes for the domains routed via the app connector.
If not set, routes for the domains will be discovered dynamically.
If set, the app connector will immediately be able to route traffic using the preconfigured routes, but may
also dynamically discover other routes.
https://tailscale.com/kb/1332/apps-best-practices#preconfiguration
type: array
minItems: 1
items:
type: string
format: cidr
exitNode:
description: |-
ExitNode defines whether the Connector node should act as a
Tailscale exit node. Defaults to false.
ExitNode defines whether the Connector device should act as a Tailscale exit node. Defaults to false.
This field is mutually exclusive with the appConnector field.
https://tailscale.com/kb/1103/exit-nodes
type: boolean
hostname:
@@ -90,9 +124,11 @@ spec:
type: string
subnetRouter:
description: |-
SubnetRouter defines subnet routes that the Connector node should
expose to tailnet. If unset, none are exposed.
SubnetRouter defines subnet routes that the Connector device should
expose to tailnet as a Tailscale subnet router.
https://tailscale.com/kb/1019/subnets/
If this field is unset, the device does not get configured as a Tailscale subnet router.
This field is mutually exclusive with the appConnector field.
type: object
required:
- advertiseRoutes
@@ -125,8 +161,10 @@ spec:
type: string
pattern: ^tag:[a-zA-Z][a-zA-Z0-9-]*$
x-kubernetes-validations:
- rule: has(self.subnetRouter) || self.exitNode == true
message: A Connector needs to be either an exit node or a subnet router, or both.
- rule: has(self.subnetRouter) || (has(self.exitNode) && self.exitNode == true) || has(self.appConnector)
message: A Connector needs to have at least one of exit node, subnet router or app connector configured.
- rule: '!((has(self.subnetRouter) || (has(self.exitNode) && self.exitNode == true)) && has(self.appConnector))'
message: The appConnector field is mutually exclusive with exitNode and subnetRouter fields.
status:
description: |-
ConnectorStatus describes the status of the Connector. This is set
@@ -200,6 +238,9 @@ spec:
If MagicDNS is enabled in your tailnet, it is the MagicDNS name of the
node.
type: string
isAppConnector:
description: IsAppConnector is set to true if the Connector acts as an app connector.
type: boolean
isExitNode:
description: IsExitNode is set to true if the Connector acts as an exit node.
type: boolean
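To make the new CEL rules above concrete: a manifest that combines appConnector with subnetRouter (or exitNode) is rejected at admission, while any one of the three alone is accepted. A sketch of a rejected manifest, with placeholder names and routes:

# Rejected by the new x-kubernetes-validations rule:
# appConnector is mutually exclusive with exitNode and subnetRouter.
apiVersion: tailscale.com/v1alpha1
kind: Connector
metadata:
  name: invalid-connector
spec:
  appConnector: {}
  subnetRouter:
    advertiseRoutes:
      - 10.0.0.0/24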

View File

@@ -73,9 +73,45 @@ spec:
enable:
description: |-
Setting enable to true will make the proxy serve Tailscale metrics
at <pod-ip>:9001/debug/metrics.
at <pod-ip>:9002/metrics.
A metrics Service named <proxy-statefulset>-metrics will also be created in the operator's namespace and will
serve the metrics at <service-ip>:9002/metrics.
In 1.78.x and 1.80.x, this field also serves as the default value for
.spec.statefulSet.pod.tailscaleContainer.debug.enable. From 1.82.0, both
fields will independently default to false.
Defaults to false.
type: boolean
serviceMonitor:
description: |-
Enable to create a Prometheus ServiceMonitor for scraping the proxy's Tailscale metrics.
The ServiceMonitor will select the metrics Service that gets created when metrics are enabled.
The ingested metrics for each Service monitor will have labels to identify the proxy:
ts_proxy_type: ingress_service|ingress_resource|connector|proxygroup
ts_proxy_parent_name: name of the parent resource (i.e. name of the Connector, Tailscale Ingress, Tailscale Service or ProxyGroup)
ts_proxy_parent_namespace: namespace of the parent resource (if the parent resource is not cluster scoped)
job: ts_<proxy type>_[<parent namespace>]_<parent_name>
type: object
required:
- enable
properties:
enable:
description: If Enable is set to true, a Prometheus ServiceMonitor will be created. Enable can only be set to true if metrics are enabled.
type: boolean
labels:
description: |-
Labels to add to the ServiceMonitor.
Labels must be valid Kubernetes labels.
https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set
type: object
additionalProperties:
type: string
maxLength: 63
pattern: ^(([a-zA-Z0-9][-._a-zA-Z0-9]*)?[a-zA-Z0-9])?$
x-kubernetes-validations:
- rule: '!(has(self.serviceMonitor) && self.serviceMonitor.enable && !self.enable)'
message: ServiceMonitor can only be enabled if metrics are enabled
statefulSet:
description: |-
Configuration parameters for the proxy's StatefulSet. Tailscale
@@ -107,6 +143,8 @@ spec:
type: object
additionalProperties:
type: string
maxLength: 63
pattern: ^(([a-zA-Z0-9][-._a-zA-Z0-9]*)?[a-zA-Z0-9])?$
pod:
description: Configuration for the proxy Pod.
type: object
@@ -1036,6 +1074,8 @@ spec:
type: object
additionalProperties:
type: string
maxLength: 63
pattern: ^(([a-zA-Z0-9][-._a-zA-Z0-9]*)?[a-zA-Z0-9])?$
nodeName:
description: |-
Proxy Pod's node name.
@@ -1249,6 +1289,25 @@ spec:
description: Configuration for the proxy container running tailscale.
type: object
properties:
debug:
description: |-
Configuration for enabling extra debug information in the container.
Not recommended for production use.
type: object
properties:
enable:
description: |-
Enable tailscaled's HTTP pprof endpoints at <pod-ip>:9001/debug/pprof/
and internal debug metrics endpoint at <pod-ip>:9001/debug/metrics, where
9001 is a container port named "debug". The endpoints and their responses
may change in backwards incompatible ways in the future, and should not
be considered stable.
In 1.78.x and 1.80.x, this setting will default to the value of
.spec.metrics.enable, and requests to the "metrics" port matching the
mux pattern /debug/ will be forwarded to the "debug" port. In 1.82.x,
this setting will default to false, and no requests will be proxied.
type: boolean
env:
description: |-
List of environment variables to set in the container.
@@ -1360,11 +1419,12 @@ spec:
securityContext:
description: |-
Container security context.
Security context specified here will override the security context by the operator.
By default the operator:
- sets 'privileged: true' for the init container
- set NET_ADMIN capability for tailscale container for proxies that
are created for Services or Connector.
Security context specified here will override the security context set by the operator.
By default the operator sets the Tailscale container and the Tailscale init container to privileged
for proxies created for Tailscale ingress and egress Service, Connector and ProxyGroup.
You can reduce the permissions of the Tailscale container to the NET_ADMIN capability by
installing a device plugin in your cluster and configuring the proxies' tun device to be created
by the device plugin, see https://github.com/tailscale/tailscale/issues/10814#issuecomment-2479977752
https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context
type: object
properties:
@@ -1553,6 +1613,25 @@ spec:
description: Configuration for the proxy init container that enables forwarding.
type: object
properties:
debug:
description: |-
Configuration for enabling extra debug information in the container.
Not recommended for production use.
type: object
properties:
enable:
description: |-
Enable tailscaled's HTTP pprof endpoints at <pod-ip>:9001/debug/pprof/
and internal debug metrics endpoint at <pod-ip>:9001/debug/metrics, where
9001 is a container port named "debug". The endpoints and their responses
may change in backwards incompatible ways in the future, and should not
be considered stable.
In 1.78.x and 1.80.x, this setting will default to the value of
.spec.metrics.enable, and requests to the "metrics" port matching the
mux pattern /debug/ will be forwarded to the "debug" port. In 1.82.x,
this setting will default to false, and no requests will be proxied.
type: boolean
env:
description: |-
List of environment variables to set in the container.
@@ -1664,11 +1743,12 @@ spec:
securityContext:
description: |-
Container security context.
Security context specified here will override the security context by the operator.
By default the operator:
- sets 'privileged: true' for the init container
- set NET_ADMIN capability for tailscale container for proxies that
are created for Services or Connector.
Security context specified here will override the security context set by the operator.
By default the operator sets the Tailscale container and the Tailscale init container to privileged
for proxies created for Tailscale ingress and egress Service, Connector and ProxyGroup.
You can reduce the permissions of the Tailscale container to the NET_ADMIN capability by
installing a device plugin in your cluster and configuring the proxies' tun device to be created
by the device plugin, see https://github.com/tailscale/tailscale/issues/10814#issuecomment-2479977752
https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context
type: object
properties:
@@ -1896,6 +1976,182 @@ spec:
Value is the taint value the toleration matches to.
If the operator is Exists, the value should be empty, otherwise just a regular string.
type: string
topologySpreadConstraints:
description: |-
Proxy Pod's topology spread constraints.
By default Tailscale Kubernetes operator does not apply any topology spread constraints.
https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/
type: array
items:
description: TopologySpreadConstraint specifies how to spread matching pods among the given topology.
type: object
required:
- maxSkew
- topologyKey
- whenUnsatisfiable
properties:
labelSelector:
description: |-
LabelSelector is used to find matching pods.
Pods that match this label selector are counted to determine the number of pods
in their corresponding topology domain.
type: object
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
type: array
items:
description: |-
A label selector requirement is a selector that contains values, a key, and an operator that
relates the key and values.
type: object
required:
- key
- operator
properties:
key:
description: key is the label key that the selector applies to.
type: string
operator:
description: |-
operator represents a key's relationship to a set of values.
Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: |-
values is an array of string values. If the operator is In or NotIn,
the values array must be non-empty. If the operator is Exists or DoesNotExist,
the values array must be empty. This array is replaced during a strategic
merge patch.
type: array
items:
type: string
x-kubernetes-list-type: atomic
x-kubernetes-list-type: atomic
matchLabels:
description: |-
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels
map is equivalent to an element of matchExpressions, whose key field is "key", the
operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
additionalProperties:
type: string
x-kubernetes-map-type: atomic
matchLabelKeys:
description: |-
MatchLabelKeys is a set of pod label keys to select the pods over which
spreading will be calculated. The keys are used to lookup values from the
incoming pod labels, those key-value labels are ANDed with labelSelector
to select the group of existing pods over which spreading will be calculated
for the incoming pod. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector.
MatchLabelKeys cannot be set when LabelSelector isn't set.
Keys that don't exist in the incoming pod labels will
be ignored. A null or empty list means only match against labelSelector.
This is a beta field and requires the MatchLabelKeysInPodTopologySpread feature gate to be enabled (enabled by default).
type: array
items:
type: string
x-kubernetes-list-type: atomic
maxSkew:
description: |-
MaxSkew describes the degree to which pods may be unevenly distributed.
When `whenUnsatisfiable=DoNotSchedule`, it is the maximum permitted difference
between the number of matching pods in the target topology and the global minimum.
The global minimum is the minimum number of matching pods in an eligible domain
or zero if the number of eligible domains is less than MinDomains.
For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same
labelSelector spread as 2/2/1:
In this case, the global minimum is 1.
| zone1 | zone2 | zone3 |
| P P | P P | P |
- if MaxSkew is 1, incoming pod can only be scheduled to zone3 to become 2/2/2;
scheduling it onto zone1(zone2) would make the ActualSkew(3-1) on zone1(zone2)
violate MaxSkew(1).
- if MaxSkew is 2, incoming pod can be scheduled onto any zone.
When `whenUnsatisfiable=ScheduleAnyway`, it is used to give higher precedence
to topologies that satisfy it.
It's a required field. Default value is 1 and 0 is not allowed.
type: integer
format: int32
minDomains:
description: |-
MinDomains indicates a minimum number of eligible domains.
When the number of eligible domains with matching topology keys is less than minDomains,
Pod Topology Spread treats "global minimum" as 0, and then the calculation of Skew is performed.
And when the number of eligible domains with matching topology keys equals or greater than minDomains,
this value has no effect on scheduling.
As a result, when the number of eligible domains is less than minDomains,
scheduler won't schedule more than maxSkew Pods to those domains.
If value is nil, the constraint behaves as if MinDomains is equal to 1.
Valid values are integers greater than 0.
When value is not nil, WhenUnsatisfiable must be DoNotSchedule.
For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same
labelSelector spread as 2/2/2:
| zone1 | zone2 | zone3 |
| P P | P P | P P |
The number of domains is less than 5(MinDomains), so "global minimum" is treated as 0.
In this situation, new pod with the same labelSelector cannot be scheduled,
because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones,
it will violate MaxSkew.
type: integer
format: int32
nodeAffinityPolicy:
description: |-
NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector
when calculating pod topology spread skew. Options are:
- Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations.
- Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations.
If this value is nil, the behavior is equivalent to the Honor policy.
This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag.
type: string
nodeTaintsPolicy:
description: |-
NodeTaintsPolicy indicates how we will treat node taints when calculating
pod topology spread skew. Options are:
- Honor: nodes without taints, along with tainted nodes for which the incoming pod
has a toleration, are included.
- Ignore: node taints are ignored. All nodes are included.
If this value is nil, the behavior is equivalent to the Ignore policy.
This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag.
type: string
topologyKey:
description: |-
TopologyKey is the key of node labels. Nodes that have a label with this key
and identical values are considered to be in the same topology.
We consider each <key, value> as a "bucket", and try to put balanced number
of pods into each bucket.
We define a domain as a particular instance of a topology.
Also, we define an eligible domain as a domain whose nodes meet the requirements of
nodeAffinityPolicy and nodeTaintsPolicy.
e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology.
And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology.
It's a required field.
type: string
whenUnsatisfiable:
description: |-
WhenUnsatisfiable indicates how to deal with a pod if it doesn't satisfy
the spread constraint.
- DoNotSchedule (default) tells the scheduler not to schedule it.
- ScheduleAnyway tells the scheduler to schedule the pod in any location,
but giving higher precedence to topologies that would help reduce the
skew.
A constraint is considered "Unsatisfiable" for an incoming pod
if and only if every possible node assignment for that pod would violate
"MaxSkew" on some topology.
For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same
labelSelector spread as 3/1/1:
| zone1 | zone2 | zone3 |
| P P P | P | P |
If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled
to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies
MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler
won't make it *more* imbalanced.
It's a required field.
type: string
tailscale:
description: |-
TailscaleConfig contains options to configure the tailscale-specific
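A ProxyClass sketch exercising the new metrics and ServiceMonitor fields described above could look like the following; the resource name and the ServiceMonitor label are illustrative assumptions.

apiVersion: tailscale.com/v1alpha1
kind: ProxyClass
metadata:
  name: metrics   # placeholder name
spec:
  metrics:
    enable: true              # proxies serve metrics at <pod-ip>:9002/metrics
    serviceMonitor:
      enable: true            # only allowed when metrics.enable is true
      labels:
        release: prometheus   # example label for a Prometheus Operator selector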

View File

@@ -0,0 +1,209 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.15.1-0.20240618033008-7824932b0cab
name: proxygroups.tailscale.com
spec:
group: tailscale.com
names:
kind: ProxyGroup
listKind: ProxyGroupList
plural: proxygroups
shortNames:
- pg
singular: proxygroup
scope: Cluster
versions:
- additionalPrinterColumns:
- description: Status of the deployed ProxyGroup resources.
jsonPath: .status.conditions[?(@.type == "ProxyGroupReady")].reason
name: Status
type: string
- description: ProxyGroup type.
jsonPath: .spec.type
name: Type
type: string
name: v1alpha1
schema:
openAPIV3Schema:
description: |-
ProxyGroup defines a set of Tailscale devices that will act as proxies.
Currently only egress ProxyGroups are supported.
Use the tailscale.com/proxy-group annotation on a Service to specify that
the egress proxy should be implemented by a ProxyGroup instead of a single
dedicated proxy. In addition to running a highly available set of proxies,
ProxyGroup also allows for serving many annotated Services from a single
set of proxies to minimise resource consumption.
More info: https://tailscale.com/kb/1438/kubernetes-operator-cluster-egress
type: object
required:
- spec
properties:
apiVersion:
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
type: string
kind:
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
metadata:
type: object
spec:
description: Spec describes the desired ProxyGroup instances.
type: object
required:
- type
properties:
hostnamePrefix:
description: |-
HostnamePrefix is the hostname prefix to use for tailnet devices created
by the ProxyGroup. Each device will have the integer number from its
StatefulSet pod appended to this prefix to form the full hostname.
HostnamePrefix can contain lower case letters, numbers and dashes; it
must not start with a dash and must be between 1 and 62 characters long.
type: string
pattern: ^[a-z0-9][a-z0-9-]{0,61}$
proxyClass:
description: |-
ProxyClass is the name of the ProxyClass custom resource that contains
configuration options that should be applied to the resources created
for this ProxyGroup. If unset, and there is no default ProxyClass
configured, the operator will create resources with the default
configuration.
type: string
replicas:
description: |-
Replicas specifies how many replicas to create the StatefulSet with.
Defaults to 2.
type: integer
format: int32
minimum: 0
tags:
description: |-
Tags that the Tailscale devices will be tagged with. Defaults to [tag:k8s].
If you specify custom tags here, make sure you also make the operator
an owner of these tags.
See https://tailscale.com/kb/1236/kubernetes-operator/#setting-up-the-kubernetes-operator.
Tags cannot be changed once a ProxyGroup device has been created.
Tag values must be in form ^tag:[a-zA-Z][a-zA-Z0-9-]*$.
type: array
items:
type: string
pattern: ^tag:[a-zA-Z][a-zA-Z0-9-]*$
type:
description: |-
Type of the ProxyGroup proxies. Supported types are egress and ingress.
Type is immutable once a ProxyGroup is created.
type: string
enum:
- egress
- ingress
x-kubernetes-validations:
- rule: self == oldSelf
message: ProxyGroup type is immutable
status:
description: |-
ProxyGroupStatus describes the status of the ProxyGroup resources. This is
set and managed by the Tailscale operator.
type: object
properties:
conditions:
description: |-
List of status conditions to indicate the status of the ProxyGroup
resources. Known condition types are `ProxyGroupReady`.
type: array
items:
description: Condition contains details for one aspect of the current state of this API Resource.
type: object
required:
- lastTransitionTime
- message
- reason
- status
- type
properties:
lastTransitionTime:
description: |-
lastTransitionTime is the last time the condition transitioned from one status to another.
This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable.
type: string
format: date-time
message:
description: |-
message is a human readable message indicating details about the transition.
This may be an empty string.
type: string
maxLength: 32768
observedGeneration:
description: |-
observedGeneration represents the .metadata.generation that the condition was set based upon.
For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date
with respect to the current state of the instance.
type: integer
format: int64
minimum: 0
reason:
description: |-
reason contains a programmatic identifier indicating the reason for the condition's last transition.
Producers of specific condition types may define expected values and meanings for this field,
and whether the values are considered a guaranteed API.
The value should be a CamelCase string.
This field may not be empty.
type: string
maxLength: 1024
minLength: 1
pattern: ^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?$
status:
description: status of the condition, one of True, False, Unknown.
type: string
enum:
- "True"
- "False"
- Unknown
type:
description: type of condition in CamelCase or in foo.example.com/CamelCase.
type: string
maxLength: 316
pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*/)?(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])$
x-kubernetes-list-map-keys:
- type
x-kubernetes-list-type: map
devices:
description: List of tailnet devices associated with the ProxyGroup StatefulSet.
type: array
items:
type: object
required:
- hostname
properties:
hostname:
description: |-
Hostname is the fully qualified domain name of the device.
If MagicDNS is enabled in your tailnet, it is the MagicDNS name of the
node.
type: string
tailnetIPs:
description: |-
TailnetIPs is the set of tailnet IP addresses (both IPv4 and IPv6)
assigned to the device.
type: array
items:
type: string
x-kubernetes-list-map-keys:
- hostname
x-kubernetes-list-type: map
served: true
storage: true
subresources:
status: {}

View File

@@ -27,6 +27,12 @@ spec:
name: v1alpha1
schema:
openAPIV3Schema:
description: |-
Recorder defines a tsrecorder device for recording SSH sessions. By default,
it will store recordings in a local ephemeral volume. If you want to persist
recordings, you can configure an S3-compatible API for storage.
More info: https://tailscale.com/kb/1484/kubernetes-operator-deploying-tsrecorder
type: object
required:
- spec
@@ -1670,7 +1676,7 @@ spec:
- type
x-kubernetes-list-type: map
devices:
description: List of tailnet devices associated with the Recorder statefulset.
description: List of tailnet devices associated with the Recorder StatefulSet.
type: array
items:
type: object

View File

@@ -0,0 +1,7 @@
apiVersion: tailscale.com/v1alpha1
kind: ProxyGroup
metadata:
name: egress-proxies
spec:
type: egress
replicas: 3
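Per the ProxyGroup description above, an annotated Service selects the ProxyGroup by name via the tailscale.com/proxy-group annotation. A rough sketch of an egress Service pointing at the example ProxyGroup follows; the Service name, tailnet FQDN and port are placeholders, and the exact Service shape should follow the cluster egress docs linked in the CRD.

apiVersion: v1
kind: Service
metadata:
  name: tailnet-db   # placeholder
  annotations:
    tailscale.com/proxy-group: egress-proxies        # the ProxyGroup defined above
    tailscale.com/tailnet-fqdn: db.example.ts.net    # placeholder tailnet destination
spec:
  type: ExternalName
  externalName: placeholder   # rewritten by the operator once the proxy is ready
  ports:
    - name: tcp-5432          # placeholder port for the upstream service
      port: 5432
      protocol: TCP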

View File

@@ -53,6 +53,10 @@ spec:
jsonPath: .status.isExitNode
name: IsExitNode
type: string
- description: Whether this Connector instance is an app connector.
jsonPath: .status.isAppConnector
name: IsAppConnector
type: string
- description: Status of the deployed Connector resources.
jsonPath: .status.conditions[?(@.type == "ConnectorReady")].reason
name: Status
@@ -91,10 +95,40 @@ spec:
More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
properties:
appConnector:
description: |-
AppConnector defines whether the Connector device should act as a Tailscale app connector. A Connector that is
configured as an app connector cannot be a subnet router or an exit node. If this field is unset, the
Connector does not act as an app connector.
Note that you will need to manually configure the permissions and the domains for the app connector via the
Admin panel.
Note also that the main tested and supported use case of this config option is to deploy an app connector on
Kubernetes to access SaaS applications available on the public internet. Using the app connector to expose
cluster workloads or other internal workloads to tailnet might work, but this is not a use case that we have
tested or optimised for.
If you are using the app connector to access SaaS applications because you need a predictable egress IP that
can be whitelisted, it is also your responsibility to ensure that cluster traffic from the connector flows
via that predictable IP, for example by enforcing that cluster egress traffic is routed via an egress NAT
device with a static IP address.
https://tailscale.com/kb/1281/app-connectors
properties:
routes:
description: |-
Routes are optional preconfigured routes for the domains routed via the app connector.
If not set, routes for the domains will be discovered dynamically.
If set, the app connector will immediately be able to route traffic using the preconfigured routes, but may
also dynamically discover other routes.
https://tailscale.com/kb/1332/apps-best-practices#preconfiguration
items:
format: cidr
type: string
minItems: 1
type: array
type: object
exitNode:
description: |-
ExitNode defines whether the Connector node should act as a
Tailscale exit node. Defaults to false.
ExitNode defines whether the Connector device should act as a Tailscale exit node. Defaults to false.
This field is mutually exclusive with the appConnector field.
https://tailscale.com/kb/1103/exit-nodes
type: boolean
hostname:
@@ -115,9 +149,11 @@ spec:
type: string
subnetRouter:
description: |-
SubnetRouter defines subnet routes that the Connector node should
expose to tailnet. If unset, none are exposed.
SubnetRouter defines subnet routes that the Connector device should
expose to tailnet as a Tailscale subnet router.
https://tailscale.com/kb/1019/subnets/
If this field is unset, the device does not get configured as a Tailscale subnet router.
This field is mutually exclusive with the appConnector field.
properties:
advertiseRoutes:
description: |-
@@ -151,8 +187,10 @@ spec:
type: array
type: object
x-kubernetes-validations:
- message: A Connector needs to be either an exit node or a subnet router, or both.
rule: has(self.subnetRouter) || self.exitNode == true
- message: A Connector needs to have at least one of exit node, subnet router or app connector configured.
rule: has(self.subnetRouter) || (has(self.exitNode) && self.exitNode == true) || has(self.appConnector)
- message: The appConnector field is mutually exclusive with exitNode and subnetRouter fields.
rule: '!((has(self.subnetRouter) || (has(self.exitNode) && self.exitNode == true)) && has(self.appConnector))'
status:
description: |-
ConnectorStatus describes the status of the Connector. This is set
@@ -225,6 +263,9 @@ spec:
If MagicDNS is enabled in your tailnet, it is the MagicDNS name of the
node.
type: string
isAppConnector:
description: IsAppConnector is set to true if the Connector acts as an app connector.
type: boolean
isExitNode:
description: IsExitNode is set to true if the Connector acts as an exit node.
type: boolean
@@ -499,12 +540,48 @@ spec:
enable:
description: |-
Setting enable to true will make the proxy serve Tailscale metrics
at <pod-ip>:9001/debug/metrics.
at <pod-ip>:9002/metrics.
A metrics Service named <proxy-statefulset>-metrics will also be created in the operator's namespace and will
serve the metrics at <service-ip>:9002/metrics.
In 1.78.x and 1.80.x, this field also serves as the default value for
.spec.statefulSet.pod.tailscaleContainer.debug.enable. From 1.82.0, both
fields will independently default to false.
Defaults to false.
type: boolean
serviceMonitor:
description: |-
Enable to create a Prometheus ServiceMonitor for scraping the proxy's Tailscale metrics.
The ServiceMonitor will select the metrics Service that gets created when metrics are enabled.
The ingested metrics for each Service monitor will have labels to identify the proxy:
ts_proxy_type: ingress_service|ingress_resource|connector|proxygroup
ts_proxy_parent_name: name of the parent resource (i.e. name of the Connector, Tailscale Ingress, Tailscale Service or ProxyGroup)
ts_proxy_parent_namespace: namespace of the parent resource (if the parent resource is not cluster scoped)
job: ts_<proxy type>_[<parent namespace>]_<parent_name>
properties:
enable:
description: If Enable is set to true, a Prometheus ServiceMonitor will be created. Enable can only be set to true if metrics are enabled.
type: boolean
labels:
additionalProperties:
maxLength: 63
pattern: ^(([a-zA-Z0-9][-._a-zA-Z0-9]*)?[a-zA-Z0-9])?$
type: string
description: |-
Labels to add to the ServiceMonitor.
Labels must be valid Kubernetes labels.
https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set
type: object
required:
- enable
type: object
required:
- enable
type: object
x-kubernetes-validations:
- message: ServiceMonitor can only be enabled if metrics are enabled
rule: '!(has(self.serviceMonitor) && self.serviceMonitor.enable && !self.enable)'
statefulSet:
description: |-
Configuration parameters for the proxy's StatefulSet. Tailscale
@@ -525,6 +602,8 @@ spec:
type: object
labels:
additionalProperties:
maxLength: 63
pattern: ^(([a-zA-Z0-9][-._a-zA-Z0-9]*)?[a-zA-Z0-9])?$
type: string
description: |-
Labels that will be added to the StatefulSet created for the proxy.
@@ -1455,6 +1534,8 @@ spec:
type: array
labels:
additionalProperties:
maxLength: 63
pattern: ^(([a-zA-Z0-9][-._a-zA-Z0-9]*)?[a-zA-Z0-9])?$
type: string
description: |-
Labels that will be added to the proxy Pod.
@@ -1675,6 +1756,25 @@ spec:
tailscaleContainer:
description: Configuration for the proxy container running tailscale.
properties:
debug:
description: |-
Configuration for enabling extra debug information in the container.
Not recommended for production use.
properties:
enable:
description: |-
Enable tailscaled's HTTP pprof endpoints at <pod-ip>:9001/debug/pprof/
and internal debug metrics endpoint at <pod-ip>:9001/debug/metrics, where
9001 is a container port named "debug". The endpoints and their responses
may change in backwards incompatible ways in the future, and should not
be considered stable.
In 1.78.x and 1.80.x, this setting will default to the value of
.spec.metrics.enable, and requests to the "metrics" port matching the
mux pattern /debug/ will be forwarded to the "debug" port. In 1.82.x,
this setting will default to false, and no requests will be proxied.
type: boolean
type: object
env:
description: |-
List of environment variables to set in the container.
@@ -1786,11 +1886,12 @@ spec:
securityContext:
description: |-
Container security context.
Security context specified here will override the security context by the operator.
By default the operator:
- sets 'privileged: true' for the init container
- set NET_ADMIN capability for tailscale container for proxies that
are created for Services or Connector.
Security context specified here will override the security context set by the operator.
By default the operator sets the Tailscale container and the Tailscale init container to privileged
for proxies created for Tailscale ingress and egress Service, Connector and ProxyGroup.
You can reduce the permissions of the Tailscale container to the NET_ADMIN capability by
installing a device plugin in your cluster and configuring the proxies' tun device to be created
by the device plugin, see https://github.com/tailscale/tailscale/issues/10814#issuecomment-2479977752
https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context
properties:
allowPrivilegeEscalation:
@@ -1979,6 +2080,25 @@ spec:
tailscaleInitContainer:
description: Configuration for the proxy init container that enables forwarding.
properties:
debug:
description: |-
Configuration for enabling extra debug information in the container.
Not recommended for production use.
properties:
enable:
description: |-
Enable tailscaled's HTTP pprof endpoints at <pod-ip>:9001/debug/pprof/
and internal debug metrics endpoint at <pod-ip>:9001/debug/metrics, where
9001 is a container port named "debug". The endpoints and their responses
may change in backwards incompatible ways in the future, and should not
be considered stable.
In 1.78.x and 1.80.x, this setting will default to the value of
.spec.metrics.enable, and requests to the "metrics" port matching the
mux pattern /debug/ will be forwarded to the "debug" port. In 1.82.x,
this setting will default to false, and no requests will be proxied.
type: boolean
type: object
env:
description: |-
List of environment variables to set in the container.
@@ -2090,11 +2210,12 @@ spec:
securityContext:
description: |-
Container security context.
Security context specified here will override the security context by the operator.
By default the operator:
- sets 'privileged: true' for the init container
- set NET_ADMIN capability for tailscale container for proxies that
are created for Services or Connector.
Security context specified here will override the security context set by the operator.
By default the operator sets the Tailscale container and the Tailscale init container to privileged
for proxies created for Tailscale ingress and egress Service, Connector and ProxyGroup.
You can reduce the permissions of the Tailscale container to the NET_ADMIN capability by
installing a device plugin in your cluster and configuring the proxies' tun device to be created
by the device plugin, see https://github.com/tailscale/tailscale/issues/10814#issuecomment-2479977752
https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context
properties:
allowPrivilegeEscalation:
@@ -2323,6 +2444,182 @@ spec:
type: string
type: object
type: array
topologySpreadConstraints:
description: |-
Proxy Pod's topology spread constraints.
By default Tailscale Kubernetes operator does not apply any topology spread constraints.
https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/
items:
description: TopologySpreadConstraint specifies how to spread matching pods among the given topology.
properties:
labelSelector:
description: |-
LabelSelector is used to find matching pods.
Pods that match this label selector are counted to determine the number of pods
in their corresponding topology domain.
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
items:
description: |-
A label selector requirement is a selector that contains values, a key, and an operator that
relates the key and values.
properties:
key:
description: key is the label key that the selector applies to.
type: string
operator:
description: |-
operator represents a key's relationship to a set of values.
Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: |-
values is an array of string values. If the operator is In or NotIn,
the values array must be non-empty. If the operator is Exists or DoesNotExist,
the values array must be empty. This array is replaced during a strategic
merge patch.
items:
type: string
type: array
x-kubernetes-list-type: atomic
required:
- key
- operator
type: object
type: array
x-kubernetes-list-type: atomic
matchLabels:
additionalProperties:
type: string
description: |-
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels
map is equivalent to an element of matchExpressions, whose key field is "key", the
operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
matchLabelKeys:
description: |-
MatchLabelKeys is a set of pod label keys to select the pods over which
spreading will be calculated. The keys are used to lookup values from the
incoming pod labels, those key-value labels are ANDed with labelSelector
to select the group of existing pods over which spreading will be calculated
for the incoming pod. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector.
MatchLabelKeys cannot be set when LabelSelector isn't set.
Keys that don't exist in the incoming pod labels will
be ignored. A null or empty list means only match against labelSelector.
This is a beta field and requires the MatchLabelKeysInPodTopologySpread feature gate to be enabled (enabled by default).
items:
type: string
type: array
x-kubernetes-list-type: atomic
maxSkew:
description: |-
MaxSkew describes the degree to which pods may be unevenly distributed.
When `whenUnsatisfiable=DoNotSchedule`, it is the maximum permitted difference
between the number of matching pods in the target topology and the global minimum.
The global minimum is the minimum number of matching pods in an eligible domain
or zero if the number of eligible domains is less than MinDomains.
For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same
labelSelector spread as 2/2/1:
In this case, the global minimum is 1.
| zone1 | zone2 | zone3 |
| P P | P P | P |
- if MaxSkew is 1, incoming pod can only be scheduled to zone3 to become 2/2/2;
scheduling it onto zone1(zone2) would make the ActualSkew(3-1) on zone1(zone2)
violate MaxSkew(1).
- if MaxSkew is 2, incoming pod can be scheduled onto any zone.
When `whenUnsatisfiable=ScheduleAnyway`, it is used to give higher precedence
to topologies that satisfy it.
It's a required field. Default value is 1 and 0 is not allowed.
format: int32
type: integer
minDomains:
description: |-
MinDomains indicates a minimum number of eligible domains.
When the number of eligible domains with matching topology keys is less than minDomains,
Pod Topology Spread treats "global minimum" as 0, and then the calculation of Skew is performed.
And when the number of eligible domains with matching topology keys equals or greater than minDomains,
this value has no effect on scheduling.
As a result, when the number of eligible domains is less than minDomains,
scheduler won't schedule more than maxSkew Pods to those domains.
If value is nil, the constraint behaves as if MinDomains is equal to 1.
Valid values are integers greater than 0.
When value is not nil, WhenUnsatisfiable must be DoNotSchedule.
For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same
labelSelector spread as 2/2/2:
| zone1 | zone2 | zone3 |
| P P | P P | P P |
The number of domains is less than 5(MinDomains), so "global minimum" is treated as 0.
In this situation, new pod with the same labelSelector cannot be scheduled,
because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones,
it will violate MaxSkew.
format: int32
type: integer
nodeAffinityPolicy:
description: |-
NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector
when calculating pod topology spread skew. Options are:
- Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations.
- Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations.
If this value is nil, the behavior is equivalent to the Honor policy.
This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag.
type: string
nodeTaintsPolicy:
description: |-
NodeTaintsPolicy indicates how we will treat node taints when calculating
pod topology spread skew. Options are:
- Honor: nodes without taints, along with tainted nodes for which the incoming pod
has a toleration, are included.
- Ignore: node taints are ignored. All nodes are included.
If this value is nil, the behavior is equivalent to the Ignore policy.
This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag.
type: string
topologyKey:
description: |-
TopologyKey is the key of node labels. Nodes that have a label with this key
and identical values are considered to be in the same topology.
We consider each <key, value> as a "bucket", and try to put balanced number
of pods into each bucket.
We define a domain as a particular instance of a topology.
Also, we define an eligible domain as a domain whose nodes meet the requirements of
nodeAffinityPolicy and nodeTaintsPolicy.
e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology.
And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology.
It's a required field.
type: string
whenUnsatisfiable:
description: |-
WhenUnsatisfiable indicates how to deal with a pod if it doesn't satisfy
the spread constraint.
- DoNotSchedule (default) tells the scheduler not to schedule it.
- ScheduleAnyway tells the scheduler to schedule the pod in any location,
but giving higher precedence to topologies that would help reduce the
skew.
A constraint is considered "Unsatisfiable" for an incoming pod
if and only if every possible node assignment for that pod would violate
"MaxSkew" on some topology.
For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same
labelSelector spread as 3/1/1:
| zone1 | zone2 | zone3 |
| P P P | P | P |
If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled
to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies
MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler
won't make it *more* imbalanced.
It's a required field.
type: string
required:
- maxSkew
- topologyKey
- whenUnsatisfiable
type: object
type: array
type: object
type: object
tailscale:
@@ -2418,6 +2715,216 @@ spec:
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.15.1-0.20240618033008-7824932b0cab
name: proxygroups.tailscale.com
spec:
group: tailscale.com
names:
kind: ProxyGroup
listKind: ProxyGroupList
plural: proxygroups
shortNames:
- pg
singular: proxygroup
scope: Cluster
versions:
- additionalPrinterColumns:
- description: Status of the deployed ProxyGroup resources.
jsonPath: .status.conditions[?(@.type == "ProxyGroupReady")].reason
name: Status
type: string
- description: ProxyGroup type.
jsonPath: .spec.type
name: Type
type: string
name: v1alpha1
schema:
openAPIV3Schema:
description: |-
ProxyGroup defines a set of Tailscale devices that will act as proxies.
Currently only egress ProxyGroups are supported.
Use the tailscale.com/proxy-group annotation on a Service to specify that
the egress proxy should be implemented by a ProxyGroup instead of a single
dedicated proxy. In addition to running a highly available set of proxies,
ProxyGroup also allows for serving many annotated Services from a single
set of proxies to minimise resource consumption.
More info: https://tailscale.com/kb/1438/kubernetes-operator-cluster-egress
properties:
apiVersion:
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
type: string
kind:
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
metadata:
type: object
spec:
description: Spec describes the desired ProxyGroup instances.
properties:
hostnamePrefix:
description: |-
HostnamePrefix is the hostname prefix to use for tailnet devices created
by the ProxyGroup. Each device will have the integer number from its
StatefulSet pod appended to this prefix to form the full hostname.
HostnamePrefix can contain lower case letters, numbers and dashes, it
must not start with a dash and must be between 1 and 62 characters long.
pattern: ^[a-z0-9][a-z0-9-]{0,61}$
type: string
proxyClass:
description: |-
ProxyClass is the name of the ProxyClass custom resource that contains
configuration options that should be applied to the resources created
for this ProxyGroup. If unset, and there is no default ProxyClass
configured, the operator will create resources with the default
configuration.
type: string
replicas:
description: |-
Replicas specifies how many replicas to create the StatefulSet with.
Defaults to 2.
format: int32
minimum: 0
type: integer
tags:
description: |-
Tags that the Tailscale devices will be tagged with. Defaults to [tag:k8s].
If you specify custom tags here, make sure you also make the operator
an owner of these tags.
See https://tailscale.com/kb/1236/kubernetes-operator/#setting-up-the-kubernetes-operator.
Tags cannot be changed once a ProxyGroup device has been created.
Tag values must be in form ^tag:[a-zA-Z][a-zA-Z0-9-]*$.
items:
pattern: ^tag:[a-zA-Z][a-zA-Z0-9-]*$
type: string
type: array
type:
description: |-
Type of the ProxyGroup proxies. Supported types are egress and ingress.
Type is immutable once a ProxyGroup is created.
enum:
- egress
- ingress
type: string
x-kubernetes-validations:
- message: ProxyGroup type is immutable
rule: self == oldSelf
required:
- type
type: object
status:
description: |-
ProxyGroupStatus describes the status of the ProxyGroup resources. This is
set and managed by the Tailscale operator.
properties:
conditions:
description: |-
List of status conditions to indicate the status of the ProxyGroup
resources. Known condition types are `ProxyGroupReady`.
items:
description: Condition contains details for one aspect of the current state of this API Resource.
properties:
lastTransitionTime:
description: |-
lastTransitionTime is the last time the condition transitioned from one status to another.
This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable.
format: date-time
type: string
message:
description: |-
message is a human readable message indicating details about the transition.
This may be an empty string.
maxLength: 32768
type: string
observedGeneration:
description: |-
observedGeneration represents the .metadata.generation that the condition was set based upon.
For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date
with respect to the current state of the instance.
format: int64
minimum: 0
type: integer
reason:
description: |-
reason contains a programmatic identifier indicating the reason for the condition's last transition.
Producers of specific condition types may define expected values and meanings for this field,
and whether the values are considered a guaranteed API.
The value should be a CamelCase string.
This field may not be empty.
maxLength: 1024
minLength: 1
pattern: ^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?$
type: string
status:
description: status of the condition, one of True, False, Unknown.
enum:
- "True"
- "False"
- Unknown
type: string
type:
description: type of condition in CamelCase or in foo.example.com/CamelCase.
maxLength: 316
pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*/)?(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])$
type: string
required:
- lastTransitionTime
- message
- reason
- status
- type
type: object
type: array
x-kubernetes-list-map-keys:
- type
x-kubernetes-list-type: map
devices:
description: List of tailnet devices associated with the ProxyGroup StatefulSet.
items:
properties:
hostname:
description: |-
Hostname is the fully qualified domain name of the device.
If MagicDNS is enabled in your tailnet, it is the MagicDNS name of the
node.
type: string
tailnetIPs:
description: |-
TailnetIPs is the set of tailnet IP addresses (both IPv4 and IPv6)
assigned to the device.
items:
type: string
type: array
required:
- hostname
type: object
type: array
x-kubernetes-list-map-keys:
- hostname
x-kubernetes-list-type: map
type: object
required:
- spec
type: object
served: true
storage: true
subresources:
status: {}
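The ProxyGroup schema above pairs with the tailscale.com/proxy-group Service annotation described in its spec. A minimal Go sketch of the two objects involved, with illustrative names, the ProxyGroupSpec field names assumed to mirror the CRD schema, and the tailscale.com/tailnet-fqdn egress annotation assumed (neither is shown in this hunk):

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	tsapi "tailscale.com/k8s-operator/apis/v1alpha1"
	"tailscale.com/types/ptr"
)

// exampleEgressProxyGroup returns a two-replica egress ProxyGroup named "dev" and an
// ExternalName Service that asks for its tailnet target to be served by that group.
func exampleEgressProxyGroup() (*tsapi.ProxyGroup, *corev1.Service) {
	pg := &tsapi.ProxyGroup{
		ObjectMeta: metav1.ObjectMeta{Name: "dev"},
		Spec: tsapi.ProxyGroupSpec{
			Type:     "egress",         // immutable once created, per the schema above
			Replicas: ptr.To(int32(2)), // devices named <hostnamePrefix>-0, <hostnamePrefix>-1
		},
	}
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "my-app",
			Namespace: "dev",
			Annotations: map[string]string{
				"tailscale.com/proxy-group":  "dev",
				"tailscale.com/tailnet-fqdn": "my-app.tailnetxyz.ts.net",
			},
		},
		Spec: corev1.ServiceSpec{
			Type:         corev1.ServiceTypeExternalName,
			ExternalName: "placeholder", // rewritten by the operator to point at its ClusterIP Service
		},
	}
	return pg, svc
}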
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.15.1-0.20240618033008-7824932b0cab
@@ -2445,6 +2952,12 @@ spec:
name: v1alpha1
schema:
openAPIV3Schema:
description: |-
Recorder defines a tsrecorder device for recording SSH sessions. By default,
it will store recordings in a local ephemeral volume. If you want to persist
recordings, you can configure an S3-compatible API for storage.
More info: https://tailscale.com/kb/1484/kubernetes-operator-deploying-tsrecorder
properties:
apiVersion:
description: |-
@@ -4084,7 +4597,7 @@ spec:
- type
x-kubernetes-list-type: map
devices:
description: List of tailnet devices associated with the Recorder statefulset.
description: List of tailnet devices associated with the Recorder StatefulSet.
items:
properties:
hostname:
@@ -4171,6 +4684,8 @@ rules:
- connectors/status
- proxyclasses
- proxyclasses/status
- proxygroups
- proxygroups/status
verbs:
- get
- list
@@ -4196,6 +4711,16 @@ rules:
- list
- watch
- update
- apiGroups:
- apiextensions.k8s.io
resourceNames:
- servicemonitors.monitoring.coreos.com
resources:
- customresourcedefinitions
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
@@ -4231,6 +4756,14 @@ rules:
- patch
- update
- watch
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- list
- watch
- apiGroups:
- apps
resources:
@@ -4253,6 +4786,9 @@ rules:
- get
- list
- watch
- create
- update
- deletecollection
- apiGroups:
- rbac.authorization.k8s.io
resources:
@@ -4265,6 +4801,16 @@ rules:
- update
- list
- watch
- apiGroups:
- monitoring.coreos.com
resources:
- servicemonitors
verbs:
- get
- list
- update
- create
- delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
@@ -4285,6 +4831,14 @@ rules:
- patch
- update
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
@@ -4357,6 +4911,14 @@ spec:
value: "false"
- name: PROXY_FIREWALL_MODE
value: auto
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_UID
valueFrom:
fieldRef:
fieldPath: metadata.uid
image: tailscale/k8s-operator:unstable
imagePullPolicy: Always
name: operator

View File

@@ -30,7 +30,13 @@ spec:
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_UID
valueFrom:
fieldRef:
fieldPath: metadata.uid
securityContext:
capabilities:
add:
- NET_ADMIN
privileged: true

View File

@@ -24,3 +24,11 @@ spec:
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_UID
valueFrom:
fieldRef:
fieldPath: metadata.uid
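The same POD_NAME/POD_UID pair is injected by each of the three manifests above via the downward API. A minimal sketch (not code from this change) of how a process in the container can read the values:

package main

import (
	"fmt"
	"os"
)

func main() {
	// POD_NAME and POD_UID are populated by the kubelet from metadata.name and
	// metadata.uid through the fieldRef entries shown above.
	fmt.Printf("running as pod %q (uid %q)\n", os.Getenv("POD_NAME"), os.Getenv("POD_UID"))
}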

View File

@@ -10,6 +10,7 @@ import (
"encoding/json"
"fmt"
"slices"
"strings"
"go.uber.org/zap"
corev1 "k8s.io/api/core/v1"
@@ -98,7 +99,15 @@ func (dnsRR *dnsRecordsReconciler) Reconcile(ctx context.Context, req reconcile.
return reconcile.Result{}, nil
}
return reconcile.Result{}, dnsRR.maybeProvision(ctx, headlessSvc, logger)
if err := dnsRR.maybeProvision(ctx, headlessSvc, logger); err != nil {
if strings.Contains(err.Error(), optimisticLockErrorMsg) {
logger.Infof("optimistic lock error, retrying: %s", err)
} else {
return reconcile.Result{}, err
}
}
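// A conflicting write (stale resourceVersion) means the object changed underneath us; for
// objects this reconciler watches, that change queues another reconcile, so the error dropped
// above is effectively retried there rather than through the controller's error backoff. The
// egress Services reconciler in this change uses the same pattern.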
return reconcile.Result{}, nil
}
// maybeProvision ensures that dnsrecords ConfigMap contains a record for the

View File

@@ -0,0 +1,108 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
package e2e
import (
"context"
"fmt"
"net/http"
"testing"
"time"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/wait"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/client/config"
kube "tailscale.com/k8s-operator"
"tailscale.com/tstest"
)
// See [TestMain] for test requirements.
func TestIngress(t *testing.T) {
if tsClient == nil {
t.Skip("TestIngress requires credentials for a tailscale client")
}
ctx := context.Background()
cfg := config.GetConfigOrDie()
cl, err := client.New(cfg, client.Options{})
if err != nil {
t.Fatal(err)
}
// Apply nginx
createAndCleanup(t, ctx, cl, &corev1.Pod{
ObjectMeta: metav1.ObjectMeta{
Name: "nginx",
Namespace: "default",
Labels: map[string]string{
"app.kubernetes.io/name": "nginx",
},
},
Spec: corev1.PodSpec{
Containers: []corev1.Container{
{
Name: "nginx",
Image: "nginx",
},
},
},
})
// Apply service to expose it as ingress
svc := &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: "test-ingress",
Namespace: "default",
Annotations: map[string]string{
"tailscale.com/expose": "true",
},
},
Spec: corev1.ServiceSpec{
Selector: map[string]string{
"app.kubernetes.io/name": "nginx",
},
Ports: []corev1.ServicePort{
{
Name: "http",
Protocol: "TCP",
Port: 80,
},
},
},
}
createAndCleanup(t, ctx, cl, svc)
// TODO: instead of timing out only when test times out, cancel context after 60s or so.
if err := wait.PollUntilContextCancel(ctx, time.Millisecond*100, true, func(ctx context.Context) (done bool, err error) {
maybeReadySvc := &corev1.Service{ObjectMeta: objectMeta("default", "test-ingress")}
if err := get(ctx, cl, maybeReadySvc); err != nil {
return false, err
}
isReady := kube.SvcIsReady(maybeReadySvc)
if isReady {
t.Log("Service is ready")
}
return isReady, nil
}); err != nil {
t.Fatalf("error waiting for the Service to become Ready: %v", err)
}
var resp *http.Response
if err := tstest.WaitFor(time.Second*60, func() error {
// TODO(tomhjp): Get the tailnet DNS name from the associated secret instead.
// If we are not the first tailnet node with the requested name, we'll get
// a -N suffix.
resp, err = tsClient.HTTPClient.Get(fmt.Sprintf("http://%s-%s:80", svc.Namespace, svc.Name))
if err != nil {
return err
}
return nil
}); err != nil {
t.Fatalf("error trying to reach service: %v", err)
}
if resp.StatusCode != http.StatusOK {
t.Fatalf("unexpected status: %v; response body s", resp.StatusCode)
}
}

View File

@@ -0,0 +1,194 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
package e2e
import (
"context"
"errors"
"fmt"
"log"
"os"
"slices"
"strings"
"testing"
"github.com/go-logr/zapr"
"github.com/tailscale/hujson"
"go.uber.org/zap/zapcore"
"golang.org/x/oauth2/clientcredentials"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"sigs.k8s.io/controller-runtime/pkg/client"
logf "sigs.k8s.io/controller-runtime/pkg/log"
kzap "sigs.k8s.io/controller-runtime/pkg/log/zap"
"tailscale.com/client/tailscale"
)
const (
e2eManagedComment = "// This is managed by the k8s-operator e2e tests"
)
var (
tsClient *tailscale.Client
testGrants = map[string]string{
"test-proxy": `{
"src": ["tag:e2e-test-proxy"],
"dst": ["tag:k8s-operator"],
"app": {
"tailscale.com/cap/kubernetes": [{
"impersonate": {
"groups": ["ts:e2e-test-proxy"],
},
}],
},
}`,
}
)
// This test suite is currently not run in CI.
// It requires some setup not handled by this code:
// - Kubernetes cluster with tailscale operator installed
// - Current kubeconfig context set to connect to that cluster (directly, no operator proxy)
// - Operator installed with --set apiServerProxyConfig.mode="true"
// - ACLs that define tag:e2e-test-proxy tag. TODO(tomhjp): Can maybe replace this prereq onwards with an API key
// - OAuth client ID and secret in TS_API_CLIENT_ID and TS_API_CLIENT_SECRET env
// - OAuth client must have auth_keys and policy_file write for tag:e2e-test-proxy tag
func TestMain(m *testing.M) {
code, err := runTests(m)
if err != nil {
log.Fatal(err)
}
os.Exit(code)
}
func runTests(m *testing.M) (code int, err error) {
zlog := kzap.NewRaw([]kzap.Opts{kzap.UseDevMode(true), kzap.Level(zapcore.DebugLevel)}...).Sugar()
logf.SetLogger(zapr.NewLogger(zlog.Desugar()))
tailscale.I_Acknowledge_This_API_Is_Unstable = true
if clientID := os.Getenv("TS_API_CLIENT_ID"); clientID != "" {
cleanup, setupErr := setupClientAndACLs()
if setupErr != nil {
return 0, setupErr
}
// Named returns let a deferred cleanup failure propagate to TestMain.
defer func() {
err = errors.Join(err, cleanup())
}()
}
return m.Run(), nil
}
func setupClientAndACLs() (cleanup func() error, _ error) {
ctx := context.Background()
credentials := clientcredentials.Config{
ClientID: os.Getenv("TS_API_CLIENT_ID"),
ClientSecret: os.Getenv("TS_API_CLIENT_SECRET"),
TokenURL: "https://login.tailscale.com/api/v2/oauth/token",
Scopes: []string{"auth_keys", "policy_file"},
}
tsClient = tailscale.NewClient("-", nil)
tsClient.HTTPClient = credentials.Client(ctx)
if err := patchACLs(ctx, tsClient, func(acls *hujson.Value) {
for test, grant := range testGrants {
deleteTestGrants(test, acls)
addTestGrant(test, grant, acls)
}
}); err != nil {
return nil, err
}
return func() error {
return patchACLs(ctx, tsClient, func(acls *hujson.Value) {
for test := range testGrants {
deleteTestGrants(test, acls)
}
})
}, nil
}
func patchACLs(ctx context.Context, tsClient *tailscale.Client, patchFn func(*hujson.Value)) error {
acls, err := tsClient.ACLHuJSON(ctx)
if err != nil {
return err
}
hj, err := hujson.Parse([]byte(acls.ACL))
if err != nil {
return err
}
patchFn(&hj)
hj.Format()
acls.ACL = hj.String()
if _, err := tsClient.SetACLHuJSON(ctx, *acls, true); err != nil {
return err
}
return nil
}
func addTestGrant(test, grant string, acls *hujson.Value) error {
v, err := hujson.Parse([]byte(grant))
if err != nil {
return err
}
// Add the managed comment to the first line of the grant object contents.
v.Value.(*hujson.Object).Members[0].Name.BeforeExtra = hujson.Extra(fmt.Sprintf("%s: %s\n", e2eManagedComment, test))
if err := acls.Patch([]byte(fmt.Sprintf(`[{"op": "add", "path": "/grants/-", "value": %s}]`, v.String()))); err != nil {
return err
}
return nil
}
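// After addTestGrant, the corresponding /grants entry is expected to look roughly like this
// (HuJSON; the managed comment is attached to the first member, formatting illustrative):
//
//	{
//		// This is managed by the k8s-operator e2e tests: test-proxy
//		"src": ["tag:e2e-test-proxy"],
//		"dst": ["tag:k8s-operator"],
//		"app": {
//			"tailscale.com/cap/kubernetes": [{
//				"impersonate": {"groups": ["ts:e2e-test-proxy"]},
//			}],
//		},
//	}
//
// deleteTestGrants finds entries by that comment prefix and removes them by index.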
func deleteTestGrants(test string, acls *hujson.Value) error {
grants := acls.Find("/grants")
var patches []string
for i, g := range grants.Value.(*hujson.Array).Elements {
members := g.Value.(*hujson.Object).Members
if len(members) == 0 {
continue
}
comment := strings.TrimSpace(string(members[0].Name.BeforeExtra))
if name, found := strings.CutPrefix(comment, e2eManagedComment+": "); found && name == test {
patches = append(patches, fmt.Sprintf(`{"op": "remove", "path": "/grants/%d"}`, i))
}
}
// Remove in reverse order so we don't affect the found indices as we mutate.
slices.Reverse(patches)
if err := acls.Patch([]byte(fmt.Sprintf("[%s]", strings.Join(patches, ",")))); err != nil {
return err
}
return nil
}
func objectMeta(namespace, name string) metav1.ObjectMeta {
return metav1.ObjectMeta{
Namespace: namespace,
Name: name,
}
}
func createAndCleanup(t *testing.T, ctx context.Context, cl client.Client, obj client.Object) {
t.Helper()
if err := cl.Create(ctx, obj); err != nil {
t.Fatal(err)
}
t.Cleanup(func() {
if err := cl.Delete(ctx, obj); err != nil {
t.Errorf("error cleaning up %s %s/%s: %s", obj.GetObjectKind().GroupVersionKind(), obj.GetNamespace(), obj.GetName(), err)
}
})
}
func get(ctx context.Context, cl client.Client, obj client.Object) error {
return cl.Get(ctx, client.ObjectKeyFromObject(obj), obj)
}

View File

@@ -0,0 +1,156 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
package e2e
import (
"context"
"encoding/json"
"fmt"
"strings"
"testing"
"time"
corev1 "k8s.io/api/core/v1"
rbacv1 "k8s.io/api/rbac/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/client-go/rest"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/client/config"
"tailscale.com/client/tailscale"
"tailscale.com/tsnet"
"tailscale.com/tstest"
)
// See [TestMain] for test requirements.
func TestProxy(t *testing.T) {
if tsClient == nil {
t.Skip("TestProxy requires credentials for a tailscale client")
}
ctx := context.Background()
cfg := config.GetConfigOrDie()
cl, err := client.New(cfg, client.Options{})
if err != nil {
t.Fatal(err)
}
// Create role and role binding to allow a group we'll impersonate to do stuff.
createAndCleanup(t, ctx, cl, &rbacv1.Role{
ObjectMeta: objectMeta("tailscale", "read-secrets"),
Rules: []rbacv1.PolicyRule{{
APIGroups: []string{""},
Verbs: []string{"get"},
Resources: []string{"secrets"},
}},
})
createAndCleanup(t, ctx, cl, &rbacv1.RoleBinding{
ObjectMeta: objectMeta("tailscale", "read-secrets"),
Subjects: []rbacv1.Subject{{
Kind: "Group",
Name: "ts:e2e-test-proxy",
}},
RoleRef: rbacv1.RoleRef{
Kind: "Role",
Name: "read-secrets",
},
})
// Get operator host name from kube secret.
operatorSecret := corev1.Secret{
ObjectMeta: objectMeta("tailscale", "operator"),
}
if err := get(ctx, cl, &operatorSecret); err != nil {
t.Fatal(err)
}
// Connect to tailnet with test-specific tag so we can use the
// [testGrants] ACLs when connecting to the API server proxy
ts := tsnetServerWithTag(t, ctx, "tag:e2e-test-proxy")
proxyCfg := &rest.Config{
Host: fmt.Sprintf("https://%s:443", hostNameFromOperatorSecret(t, operatorSecret)),
Dial: ts.Dial,
}
proxyCl, err := client.New(proxyCfg, client.Options{})
if err != nil {
t.Fatal(err)
}
// Expect success.
allowedSecret := corev1.Secret{
ObjectMeta: objectMeta("tailscale", "operator"),
}
// Wait for up to a minute the first time we use the proxy, to give it time
// to provision the TLS certs.
if err := tstest.WaitFor(time.Second*60, func() error {
return get(ctx, proxyCl, &allowedSecret)
}); err != nil {
t.Fatal(err)
}
// Expect forbidden.
forbiddenSecret := corev1.Secret{
ObjectMeta: objectMeta("default", "operator"),
}
if err := get(ctx, proxyCl, &forbiddenSecret); err == nil || !apierrors.IsForbidden(err) {
t.Fatalf("expected forbidden error fetching secret from default namespace: %s", err)
}
}
func tsnetServerWithTag(t *testing.T, ctx context.Context, tag string) *tsnet.Server {
caps := tailscale.KeyCapabilities{
Devices: tailscale.KeyDeviceCapabilities{
Create: tailscale.KeyDeviceCreateCapabilities{
Reusable: false,
Preauthorized: true,
Ephemeral: true,
Tags: []string{tag},
},
},
}
authKey, authKeyMeta, err := tsClient.CreateKey(ctx, caps)
if err != nil {
t.Fatal(err)
}
t.Cleanup(func() {
if err := tsClient.DeleteKey(ctx, authKeyMeta.ID); err != nil {
t.Errorf("error deleting auth key: %s", err)
}
})
ts := &tsnet.Server{
Hostname: "test-proxy",
Ephemeral: true,
Dir: t.TempDir(),
AuthKey: authKey,
}
_, err = ts.Up(ctx)
if err != nil {
t.Fatal(err)
}
t.Cleanup(func() {
if err := ts.Close(); err != nil {
t.Errorf("error shutting down tsnet.Server: %s", err)
}
})
return ts
}
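// hostNameFromOperatorSecret digs the operator's device name out of its tailscaled state.
// The state Secret is assumed to hold entries shaped roughly like (illustrative values):
//
//	_current-profile: profile-1234
//	_profiles:        {"1234": {"Name": "tailscale-operator.tailnetxyz.ts.net", ...}}
//
// i.e. the "profile-" prefix is stripped from _current-profile and the remainder keys into the
// _profiles map, whose Name field is the host name the test dials.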
func hostNameFromOperatorSecret(t *testing.T, s corev1.Secret) string {
profiles := map[string]any{}
if err := json.Unmarshal(s.Data["_profiles"], &profiles); err != nil {
t.Fatal(err)
}
key, ok := strings.CutPrefix(string(s.Data["_current-profile"]), "profile-")
if !ok {
t.Fatal(string(s.Data["_current-profile"]))
}
profile, ok := profiles[key]
if !ok {
t.Fatal(profiles)
}
return profile.(map[string]any)["Name"].(string)
}

View File

@@ -0,0 +1,213 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build !plan9
package main
import (
"context"
"encoding/json"
"fmt"
"net/netip"
"reflect"
"strings"
"go.uber.org/zap"
corev1 "k8s.io/api/core/v1"
discoveryv1 "k8s.io/api/discovery/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
tsoperator "tailscale.com/k8s-operator"
"tailscale.com/kube/egressservices"
"tailscale.com/types/ptr"
)
// egressEpsReconciler reconciles EndpointSlices for tailnet services exposed to cluster via egress ProxyGroup proxies.
type egressEpsReconciler struct {
client.Client
logger *zap.SugaredLogger
tsNamespace string
}
// Reconcile reconciles an EndpointSlice for a tailnet service. It updates the EndpointSlice with the endpoints of
// those ProxyGroup Pods that are ready to route traffic to the tailnet service.
// It compares tailnet service state stored in egress proxy state Secrets by containerboot with the desired
// configuration stored in proxy-cfg ConfigMap to determine if the endpoint is ready.
func (er *egressEpsReconciler) Reconcile(ctx context.Context, req reconcile.Request) (res reconcile.Result, err error) {
l := er.logger.With("Service", req.NamespacedName)
l.Debugf("starting reconcile")
defer l.Debugf("reconcile finished")
eps := new(discoveryv1.EndpointSlice)
err = er.Get(ctx, req.NamespacedName, eps)
if apierrors.IsNotFound(err) {
l.Debugf("EndpointSlice not found")
return reconcile.Result{}, nil
}
if err != nil {
return reconcile.Result{}, fmt.Errorf("failed to get EndpointSlice: %w", err)
}
if !eps.DeletionTimestamp.IsZero() {
l.Debugf("EnpointSlice is being deleted")
return res, nil
}
// Get the user-created ExternalName Service and use its status conditions to determine whether cluster
// resources are set up for this tailnet service.
svc := &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: eps.Labels[LabelParentName],
Namespace: eps.Labels[LabelParentNamespace],
},
}
err = er.Get(ctx, client.ObjectKeyFromObject(svc), svc)
if apierrors.IsNotFound(err) {
l.Infof("ExternalName Service %s/%s not found, perhaps it was deleted", svc.Namespace, svc.Name)
return res, nil
}
if err != nil {
return res, fmt.Errorf("error retrieving ExternalName Service: %w", err)
}
if !tsoperator.EgressServiceIsValidAndConfigured(svc) {
l.Infof("Cluster resources for ExternalName Service %s/%s are not yet configured", svc.Namespace, svc.Name)
return res, nil
}
// TODO(irbekrm): currently this reconcile loop runs all the checks every time it's triggered, which is
// wasteful. Once we have a Ready condition for ExternalName Services for ProxyGroup, use the condition to
// determine if a reconcile is needed.
oldEps := eps.DeepCopy()
proxyGroupName := eps.Labels[labelProxyGroup]
tailnetSvc := tailnetSvcName(svc)
l = l.With("tailnet-service-name", tailnetSvc)
// Retrieve the desired tailnet service configuration from the ConfigMap.
_, cfgs, err := egressSvcsConfigs(ctx, er.Client, proxyGroupName, er.tsNamespace)
if err != nil {
return res, fmt.Errorf("error retrieving tailnet services configuration: %w", err)
}
cfg, ok := (*cfgs)[tailnetSvc]
if !ok {
l.Infof("[unexpected] configuration for tailnet service %s not found", tailnetSvc)
return res, nil
}
// Check which Pods in ProxyGroup are ready to route traffic to this
// egress service.
podList := &corev1.PodList{}
if err := er.List(ctx, podList, client.MatchingLabels(pgLabels(proxyGroupName, nil))); err != nil {
return res, fmt.Errorf("error listing Pods for ProxyGroup %s: %w", proxyGroupName, err)
}
newEndpoints := make([]discoveryv1.Endpoint, 0)
for _, pod := range podList.Items {
ready, err := er.podIsReadyToRouteTraffic(ctx, pod, &cfg, tailnetSvc, l)
if err != nil {
return res, fmt.Errorf("error verifying if Pod is ready to route traffic: %w", err)
}
if !ready {
continue // maybe next time
}
podIP, err := podIPv4(&pod) // we currently only support IPv4
if err != nil {
return res, fmt.Errorf("error determining IPv4 address for Pod: %w", err)
}
newEndpoints = append(newEndpoints, discoveryv1.Endpoint{
Hostname: (*string)(&pod.UID),
Addresses: []string{podIP},
Conditions: discoveryv1.EndpointConditions{
Ready: ptr.To(true),
Serving: ptr.To(true),
Terminating: ptr.To(false),
},
})
}
// Note that Endpoints are being overwritten with the currently valid endpoints so we don't need to explicitly
// run a cleanup for deleted Pods etc.
eps.Endpoints = newEndpoints
if !reflect.DeepEqual(eps, oldEps) {
l.Infof("Updating EndpointSlice to ensure traffic is routed to ready proxy Pods")
if err := er.Update(ctx, eps); err != nil {
return res, fmt.Errorf("error updating EndpointSlice: %w", err)
}
}
return res, nil
}
func podIPv4(pod *corev1.Pod) (string, error) {
for _, ip := range pod.Status.PodIPs {
parsed, err := netip.ParseAddr(ip.IP)
if err != nil {
return "", fmt.Errorf("error parsing IP address %s: %w", ip, err)
}
if parsed.Is4() {
return parsed.String(), nil
}
}
return "", nil
}
// podIsReadyToRouteTraffic returns true if it appears that the proxy Pod has configured firewall rules to be able to
// route traffic to the given tailnet service. It retrieves the proxy's state Secret and compares the tailnet service
// status written there to the desired service configuration.
func (er *egressEpsReconciler) podIsReadyToRouteTraffic(ctx context.Context, pod corev1.Pod, cfg *egressservices.Config, tailnetSvcName string, l *zap.SugaredLogger) (bool, error) {
l = l.With("proxy_pod", pod.Name)
l.Debugf("checking whether proxy is ready to route to egress service")
if !pod.DeletionTimestamp.IsZero() {
l.Debugf("proxy Pod is being deleted, ignore")
return false, nil
}
podIP, err := podIPv4(&pod)
if err != nil {
return false, fmt.Errorf("error determining Pod IP address: %v", err)
}
if podIP == "" {
l.Infof("[unexpected] Pod does not have an IPv4 address, and IPv6 is not currently supported")
return false, nil
}
stateS := &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: pod.Name,
Namespace: pod.Namespace,
},
}
err = er.Get(ctx, client.ObjectKeyFromObject(stateS), stateS)
if apierrors.IsNotFound(err) {
l.Debugf("proxy does not have a state Secret, waiting...")
return false, nil
}
if err != nil {
return false, fmt.Errorf("error getting state Secret: %w", err)
}
svcStatusBS := stateS.Data[egressservices.KeyEgressServices]
if len(svcStatusBS) == 0 {
l.Debugf("proxy's state Secret does not contain egress services status, waiting...")
return false, nil
}
svcStatus := &egressservices.Status{}
if err := json.Unmarshal(svcStatusBS, svcStatus); err != nil {
return false, fmt.Errorf("error unmarshalling egress service status: %w", err)
}
if !strings.EqualFold(podIP, svcStatus.PodIPv4) {
l.Infof("proxy's egress service status is for Pod IP %s, current proxy's Pod IP %s, waiting for the proxy to reconfigure...", svcStatus.PodIPv4, podIP)
return false, nil
}
st, ok := svcStatus.Services[tailnetSvcName]
if !ok {
l.Infof("proxy's state Secret does not have egress service status, waiting...")
return false, nil
}
if !reflect.DeepEqual(cfg.TailnetTarget, st.TailnetTarget) {
l.Infof("proxy has configured egress service for tailnet target %v, current target is %v, waiting for proxy to reconfigure...", st.TailnetTarget, cfg.TailnetTarget)
return false, nil
}
if !reflect.DeepEqual(cfg.Ports, st.Ports) {
l.Debugf("proxy has configured egress service for ports %#+v, wants ports %#+v, waiting for proxy to reconfigure", st.Ports, cfg.Ports)
return false, nil
}
l.Debugf("proxy is ready to route traffic to egress service")
return true, nil
}
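// For a service whose desired config targets my-app.tailnetxyz.ts.net on tailnet port 80 via
// container port 10080 (illustrative values), the proxy is considered ready once its state
// Secret holds a status equivalent to:
//
//	egressservices.Status{
//		PodIPv4: "10.0.0.1", // must match the Pod's current IPv4 address
//		Services: map[string]*egressservices.ServiceStatus{
//			"<tailnetSvcName>": {
//				TailnetTarget: egressservices.TailnetTarget{FQDN: "my-app.tailnetxyz.ts.net"},
//				Ports:         map[egressservices.PortMap]struct{}{{Protocol: "TCP", MatchPort: 10080, TargetPort: 80}: {}},
//			},
//		},
//	}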

View File

@@ -0,0 +1,211 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build !plan9
package main
import (
"encoding/json"
"fmt"
"math/rand/v2"
"testing"
"github.com/AlekSi/pointer"
"go.uber.org/zap"
corev1 "k8s.io/api/core/v1"
discoveryv1 "k8s.io/api/discovery/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"sigs.k8s.io/controller-runtime/pkg/client/fake"
tsapi "tailscale.com/k8s-operator/apis/v1alpha1"
"tailscale.com/kube/egressservices"
"tailscale.com/tstest"
"tailscale.com/util/mak"
)
func TestTailscaleEgressEndpointSlices(t *testing.T) {
clock := tstest.NewClock(tstest.ClockOpts{})
svc := &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: "test",
Namespace: "default",
UID: types.UID("1234-UID"),
Annotations: map[string]string{
AnnotationTailnetTargetFQDN: "foo.bar.ts.net",
AnnotationProxyGroup: "foo",
},
},
Spec: corev1.ServiceSpec{
ExternalName: "placeholder",
Type: corev1.ServiceTypeExternalName,
Selector: nil,
Ports: []corev1.ServicePort{
{
Name: "http",
Protocol: "TCP",
Port: 80,
},
},
},
Status: corev1.ServiceStatus{
Conditions: []metav1.Condition{
condition(tsapi.EgressSvcConfigured, metav1.ConditionTrue, "", "", clock),
condition(tsapi.EgressSvcValid, metav1.ConditionTrue, "", "", clock),
},
},
}
port := randomPort()
cm := configMapForSvc(t, svc, port)
fc := fake.NewClientBuilder().
WithScheme(tsapi.GlobalScheme).
WithObjects(svc, cm).
WithStatusSubresource(svc).
Build()
zl, err := zap.NewDevelopment()
if err != nil {
t.Fatal(err)
}
er := &egressEpsReconciler{
Client: fc,
logger: zl.Sugar(),
tsNamespace: "operator-ns",
}
eps := &discoveryv1.EndpointSlice{
ObjectMeta: metav1.ObjectMeta{
Name: "foo",
Namespace: "operator-ns",
Labels: map[string]string{
LabelParentName: "test",
LabelParentNamespace: "default",
labelSvcType: typeEgress,
labelProxyGroup: "foo"},
},
AddressType: discoveryv1.AddressTypeIPv4,
}
mustCreate(t, fc, eps)
t.Run("no_proxy_group_resources", func(t *testing.T) {
expectReconciled(t, er, "operator-ns", "foo") // should not error
})
t.Run("no_pods_ready_to_route_traffic", func(t *testing.T) {
pod, stateS := podAndSecretForProxyGroup("foo")
mustCreate(t, fc, pod)
mustCreate(t, fc, stateS)
expectReconciled(t, er, "operator-ns", "foo") // should not error
})
t.Run("pods_are_ready_to_route_traffic", func(t *testing.T) {
pod, stateS := podAndSecretForProxyGroup("foo")
stBs := serviceStatusForPodIP(t, svc, pod.Status.PodIPs[0].IP, port)
mustUpdate(t, fc, "operator-ns", stateS.Name, func(s *corev1.Secret) {
mak.Set(&s.Data, egressservices.KeyEgressServices, stBs)
})
expectReconciled(t, er, "operator-ns", "foo")
eps.Endpoints = append(eps.Endpoints, discoveryv1.Endpoint{
Addresses: []string{"10.0.0.1"},
Hostname: pointer.To("foo"),
Conditions: discoveryv1.EndpointConditions{
Serving: pointer.ToBool(true),
Ready: pointer.ToBool(true),
Terminating: pointer.ToBool(false),
},
})
expectEqual(t, fc, eps, nil)
})
t.Run("status_does_not_match_pod_ip", func(t *testing.T) {
_, stateS := podAndSecretForProxyGroup("foo") // replica Pod has IP 10.0.0.1
stBs := serviceStatusForPodIP(t, svc, "10.0.0.2", port) // status is for a Pod with IP 10.0.0.2
mustUpdate(t, fc, "operator-ns", stateS.Name, func(s *corev1.Secret) {
mak.Set(&s.Data, egressservices.KeyEgressServices, stBs)
})
expectReconciled(t, er, "operator-ns", "foo")
eps.Endpoints = []discoveryv1.Endpoint{}
expectEqual(t, fc, eps, nil)
})
}
func configMapForSvc(t *testing.T, svc *corev1.Service, p uint16) *corev1.ConfigMap {
t.Helper()
ports := make(map[egressservices.PortMap]struct{})
for _, port := range svc.Spec.Ports {
ports[egressservices.PortMap{Protocol: string(port.Protocol), MatchPort: p, TargetPort: uint16(port.Port)}] = struct{}{}
}
cfg := egressservices.Config{
Ports: ports,
}
if fqdn := svc.Annotations[AnnotationTailnetTargetFQDN]; fqdn != "" {
cfg.TailnetTarget = egressservices.TailnetTarget{FQDN: fqdn}
}
if ip := svc.Annotations[AnnotationTailnetTargetIP]; ip != "" {
cfg.TailnetTarget = egressservices.TailnetTarget{IP: ip}
}
name := tailnetSvcName(svc)
cfgs := egressservices.Configs{name: cfg}
bs, err := json.Marshal(&cfgs)
if err != nil {
t.Fatalf("error marshalling config: %v", err)
}
cm := &corev1.ConfigMap{
ObjectMeta: metav1.ObjectMeta{
Name: pgEgressCMName(svc.Annotations[AnnotationProxyGroup]),
Namespace: "operator-ns",
},
BinaryData: map[string][]byte{egressservices.KeyEgressServices: bs},
}
return cm
}
func serviceStatusForPodIP(t *testing.T, svc *corev1.Service, ip string, p uint16) []byte {
t.Helper()
ports := make(map[egressservices.PortMap]struct{})
for _, port := range svc.Spec.Ports {
ports[egressservices.PortMap{Protocol: string(port.Protocol), MatchPort: p, TargetPort: uint16(port.Port)}] = struct{}{}
}
svcSt := egressservices.ServiceStatus{Ports: ports}
if fqdn := svc.Annotations[AnnotationTailnetTargetFQDN]; fqdn != "" {
svcSt.TailnetTarget = egressservices.TailnetTarget{FQDN: fqdn}
}
if ip := svc.Annotations[AnnotationTailnetTargetIP]; ip != "" {
svcSt.TailnetTarget = egressservices.TailnetTarget{IP: ip}
}
svcName := tailnetSvcName(svc)
st := egressservices.Status{
PodIPv4: ip,
Services: map[string]*egressservices.ServiceStatus{svcName: &svcSt},
}
bs, err := json.Marshal(st)
if err != nil {
t.Fatalf("error marshalling service status: %v", err)
}
return bs
}
func podAndSecretForProxyGroup(pg string) (*corev1.Pod, *corev1.Secret) {
p := &corev1.Pod{
ObjectMeta: metav1.ObjectMeta{
Name: fmt.Sprintf("%s-0", pg),
Namespace: "operator-ns",
Labels: pgLabels(pg, nil),
UID: "foo",
},
Status: corev1.PodStatus{
PodIPs: []corev1.PodIP{
{IP: "10.0.0.1"},
},
},
}
s := &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: fmt.Sprintf("%s-0", pg),
Namespace: "operator-ns",
Labels: pgSecretLabels(pg, "state"),
},
}
return p, s
}
func randomPort() uint16 {
return uint16(rand.Int32N(1000) + 1000)
}

View File

@@ -0,0 +1,179 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build !plan9
package main
import (
"context"
"errors"
"fmt"
"strings"
"go.uber.org/zap"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
discoveryv1 "k8s.io/api/discovery/v1"
apiequality "k8s.io/apimachinery/pkg/api/equality"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
tsoperator "tailscale.com/k8s-operator"
tsapi "tailscale.com/k8s-operator/apis/v1alpha1"
"tailscale.com/tstime"
)
const (
reasonReadinessCheckFailed = "ReadinessCheckFailed"
reasonClusterResourcesNotReady = "ClusterResourcesNotReady"
reasonNoProxies = "NoProxiesConfigured"
reasonNotReady = "NotReadyToRouteTraffic"
reasonReady = "ReadyToRouteTraffic"
reasonPartiallyReady = "PartiallyReadyToRouteTraffic"
msgReadyToRouteTemplate = "%d out of %d replicas are ready to route traffic"
)
type egressSvcsReadinessReconciler struct {
client.Client
logger *zap.SugaredLogger
clock tstime.Clock
tsNamespace string
}
// Reconcile reconciles an ExternalName Service that defines a tailnet target to be exposed on a ProxyGroup and sets the
// EgressSvcReady condition on it. The condition gets set to true if at least one of the proxies is currently ready to
// route traffic to the target. It compares proxy Pod IPs with the endpoints set on the EndpointSlice for the egress
// service to determine how many replicas are currently able to route traffic.
func (esrr *egressSvcsReadinessReconciler) Reconcile(ctx context.Context, req reconcile.Request) (res reconcile.Result, err error) {
l := esrr.logger.With("Service", req.NamespacedName)
defer l.Info("reconcile finished")
svc := new(corev1.Service)
if err = esrr.Get(ctx, req.NamespacedName, svc); apierrors.IsNotFound(err) {
l.Info("Service not found")
return res, nil
} else if err != nil {
return res, fmt.Errorf("failed to get Service: %w", err)
}
var (
reason, msg string
st metav1.ConditionStatus = metav1.ConditionUnknown
)
oldStatus := svc.Status.DeepCopy()
defer func() {
tsoperator.SetServiceCondition(svc, tsapi.EgressSvcReady, st, reason, msg, esrr.clock, l)
if !apiequality.Semantic.DeepEqual(oldStatus, &svc.Status) {
err = errors.Join(err, esrr.Status().Update(ctx, svc))
}
}()
crl := egressSvcChildResourceLabels(svc)
eps, err := getSingleObject[discoveryv1.EndpointSlice](ctx, esrr.Client, esrr.tsNamespace, crl)
if err != nil {
err = fmt.Errorf("error getting EndpointSlice: %w", err)
reason = reasonReadinessCheckFailed
msg = err.Error()
return res, err
}
if eps == nil {
l.Infof("EndpointSlice for Service does not yet exist, waiting...")
reason, msg = reasonClusterResourcesNotReady, reasonClusterResourcesNotReady
st = metav1.ConditionFalse
return res, nil
}
pg := &tsapi.ProxyGroup{
ObjectMeta: metav1.ObjectMeta{
Name: svc.Annotations[AnnotationProxyGroup],
},
}
err = esrr.Get(ctx, client.ObjectKeyFromObject(pg), pg)
if apierrors.IsNotFound(err) {
l.Infof("ProxyGroup for Service does not exist, waiting...")
reason, msg = reasonClusterResourcesNotReady, reasonClusterResourcesNotReady
st = metav1.ConditionFalse
return res, nil
}
if err != nil {
err = fmt.Errorf("error retrieving ProxyGroup: %w", err)
reason = reasonReadinessCheckFailed
msg = err.Error()
return res, err
}
if !tsoperator.ProxyGroupIsReady(pg) {
l.Infof("ProxyGroup for Service is not ready, waiting...")
reason, msg = reasonClusterResourcesNotReady, reasonClusterResourcesNotReady
st = metav1.ConditionFalse
return res, nil
}
replicas := pgReplicas(pg)
if replicas == 0 {
l.Infof("ProxyGroup replicas set to 0")
reason, msg = reasonNoProxies, reasonNoProxies
st = metav1.ConditionFalse
return res, nil
}
podLabels := pgLabels(pg.Name, nil)
var readyReplicas int32
for i := range replicas {
podLabels[appsv1.PodIndexLabel] = fmt.Sprintf("%d", i)
pod, err := getSingleObject[corev1.Pod](ctx, esrr.Client, esrr.tsNamespace, podLabels)
if err != nil {
err = fmt.Errorf("error retrieving ProxyGroup Pod: %w", err)
reason = reasonReadinessCheckFailed
msg = err.Error()
return res, err
}
if pod == nil {
l.Infof("[unexpected] ProxyGroup is ready, but replica %d was not found", i)
reason, msg = reasonClusterResourcesNotReady, reasonClusterResourcesNotReady
return res, nil
}
l.Infof("looking at Pod with IPs %v", pod.Status.PodIPs)
ready := false
for _, ep := range eps.Endpoints {
l.Infof("looking at endpoint with addresses %v", ep.Addresses)
if endpointReadyForPod(&ep, pod, l) {
l.Infof("endpoint is ready for Pod")
ready = true
break
}
}
if ready {
readyReplicas++
}
}
msg = fmt.Sprintf(msgReadyToRouteTemplate, readyReplicas, replicas)
if readyReplicas == 0 {
reason = reasonNotReady
st = metav1.ConditionFalse
return res, nil
}
st = metav1.ConditionTrue
if readyReplicas < replicas {
reason = reasonPartiallyReady
} else {
reason = reasonReady
}
return res, nil
}
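// For example, with a two-replica ProxyGroup where only one replica's Pod IP shows up as a
// ready endpoint, the Service gets an EgressSvcReady condition with status True, reason
// PartiallyReadyToRouteTraffic and message "1 out of 2 replicas are ready to route traffic".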
// endpointReadyForPod returns true if the endpoint is for the Pod's IPv4 address and is ready to serve traffic.
// Endpoint must not be nil.
func endpointReadyForPod(ep *discoveryv1.Endpoint, pod *corev1.Pod, l *zap.SugaredLogger) bool {
podIP, err := podIPv4(pod)
if err != nil {
l.Infof("[unexpected] error retrieving Pod's IPv4 address: %v", err)
return false
}
// Currently we only ever set a single address on an Endpoint and nothing else is meant to modify this.
if len(ep.Addresses) != 1 {
return false
}
return strings.EqualFold(ep.Addresses[0], podIP) &&
*ep.Conditions.Ready &&
*ep.Conditions.Serving &&
!*ep.Conditions.Terminating
}

View File

@@ -0,0 +1,169 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build !plan9
package main
import (
"fmt"
"testing"
"github.com/AlekSi/pointer"
"go.uber.org/zap"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
discoveryv1 "k8s.io/api/discovery/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"sigs.k8s.io/controller-runtime/pkg/client/fake"
tsoperator "tailscale.com/k8s-operator"
tsapi "tailscale.com/k8s-operator/apis/v1alpha1"
"tailscale.com/tstest"
"tailscale.com/tstime"
)
func TestEgressServiceReadiness(t *testing.T) {
// We need to pass a ProxyGroup object to WithStatusSubresource because of some quirks in how the fake client
// works. Without it, the code further down would not be able to update ProxyGroup status.
fc := fake.NewClientBuilder().
WithScheme(tsapi.GlobalScheme).
WithStatusSubresource(&tsapi.ProxyGroup{}).
Build()
zl, _ := zap.NewDevelopment()
cl := tstest.NewClock(tstest.ClockOpts{})
rec := &egressSvcsReadinessReconciler{
tsNamespace: "operator-ns",
Client: fc,
logger: zl.Sugar(),
clock: cl,
}
tailnetFQDN := "my-app.tailnetxyz.ts.net"
egressSvc := &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: "my-app",
Namespace: "dev",
Annotations: map[string]string{
AnnotationProxyGroup: "dev",
AnnotationTailnetTargetFQDN: tailnetFQDN,
},
},
}
fakeClusterIPSvc := &corev1.Service{ObjectMeta: metav1.ObjectMeta{Name: "my-app", Namespace: "operator-ns"}}
l := egressSvcEpsLabels(egressSvc, fakeClusterIPSvc)
eps := &discoveryv1.EndpointSlice{
ObjectMeta: metav1.ObjectMeta{
Name: "my-app",
Namespace: "operator-ns",
Labels: l,
},
AddressType: discoveryv1.AddressTypeIPv4,
}
pg := &tsapi.ProxyGroup{
ObjectMeta: metav1.ObjectMeta{
Name: "dev",
},
}
mustCreate(t, fc, egressSvc)
setClusterNotReady(egressSvc, cl, zl.Sugar())
t.Run("endpointslice_does_not_exist", func(t *testing.T) {
expectReconciled(t, rec, "dev", "my-app")
expectEqual(t, fc, egressSvc, nil) // not ready
})
t.Run("proxy_group_does_not_exist", func(t *testing.T) {
mustCreate(t, fc, eps)
expectReconciled(t, rec, "dev", "my-app")
expectEqual(t, fc, egressSvc, nil) // still not ready
})
t.Run("proxy_group_not_ready", func(t *testing.T) {
mustCreate(t, fc, pg)
expectReconciled(t, rec, "dev", "my-app")
expectEqual(t, fc, egressSvc, nil) // still not ready
})
t.Run("no_ready_replicas", func(t *testing.T) {
setPGReady(pg, cl, zl.Sugar())
mustUpdateStatus(t, fc, pg.Namespace, pg.Name, func(p *tsapi.ProxyGroup) {
p.Status = pg.Status
})
expectEqual(t, fc, pg, nil)
for i := range pgReplicas(pg) {
p := pod(pg, i)
mustCreate(t, fc, p)
mustUpdateStatus(t, fc, p.Namespace, p.Name, func(existing *corev1.Pod) {
existing.Status.PodIPs = p.Status.PodIPs
})
}
expectReconciled(t, rec, "dev", "my-app")
setNotReady(egressSvc, cl, zl.Sugar(), pgReplicas(pg))
expectEqual(t, fc, egressSvc, nil) // still not ready
})
t.Run("one_ready_replica", func(t *testing.T) {
setEndpointForReplica(pg, 0, eps)
mustUpdate(t, fc, eps.Namespace, eps.Name, func(e *discoveryv1.EndpointSlice) {
e.Endpoints = eps.Endpoints
})
setReady(egressSvc, cl, zl.Sugar(), pgReplicas(pg), 1)
expectReconciled(t, rec, "dev", "my-app")
expectEqual(t, fc, egressSvc, nil) // partially ready
})
t.Run("all_replicas_ready", func(t *testing.T) {
for i := range pgReplicas(pg) {
setEndpointForReplica(pg, i, eps)
}
mustUpdate(t, fc, eps.Namespace, eps.Name, func(e *discoveryv1.EndpointSlice) {
e.Endpoints = eps.Endpoints
})
setReady(egressSvc, cl, zl.Sugar(), pgReplicas(pg), pgReplicas(pg))
expectReconciled(t, rec, "dev", "my-app")
expectEqual(t, fc, egressSvc, nil) // ready
})
}
func setClusterNotReady(svc *corev1.Service, cl tstime.Clock, l *zap.SugaredLogger) {
tsoperator.SetServiceCondition(svc, tsapi.EgressSvcReady, metav1.ConditionFalse, reasonClusterResourcesNotReady, reasonClusterResourcesNotReady, cl, l)
}
func setNotReady(svc *corev1.Service, cl tstime.Clock, l *zap.SugaredLogger, replicas int32) {
msg := fmt.Sprintf(msgReadyToRouteTemplate, 0, replicas)
tsoperator.SetServiceCondition(svc, tsapi.EgressSvcReady, metav1.ConditionFalse, reasonNotReady, msg, cl, l)
}
func setReady(svc *corev1.Service, cl tstime.Clock, l *zap.SugaredLogger, replicas, readyReplicas int32) {
reason := reasonPartiallyReady
if readyReplicas == replicas {
reason = reasonReady
}
msg := fmt.Sprintf(msgReadyToRouteTemplate, readyReplicas, replicas)
tsoperator.SetServiceCondition(svc, tsapi.EgressSvcReady, metav1.ConditionTrue, reason, msg, cl, l)
}
func setPGReady(pg *tsapi.ProxyGroup, cl tstime.Clock, l *zap.SugaredLogger) {
tsoperator.SetProxyGroupCondition(pg, tsapi.ProxyGroupReady, metav1.ConditionTrue, "foo", "foo", pg.Generation, cl, l)
}
func setEndpointForReplica(pg *tsapi.ProxyGroup, ordinal int32, eps *discoveryv1.EndpointSlice) {
p := pod(pg, ordinal)
eps.Endpoints = append(eps.Endpoints, discoveryv1.Endpoint{
Addresses: []string{p.Status.PodIPs[0].IP},
Conditions: discoveryv1.EndpointConditions{
Ready: pointer.ToBool(true),
Serving: pointer.ToBool(true),
Terminating: pointer.ToBool(false),
},
})
}
func pod(pg *tsapi.ProxyGroup, ordinal int32) *corev1.Pod {
l := pgLabels(pg.Name, nil)
l[appsv1.PodIndexLabel] = fmt.Sprintf("%d", ordinal)
ip := fmt.Sprintf("10.0.0.%d", ordinal)
return &corev1.Pod{
ObjectMeta: metav1.ObjectMeta{
Name: fmt.Sprintf("%s-%d", pg.Name, ordinal),
Namespace: "operator-ns",
Labels: l,
},
Status: corev1.PodStatus{
PodIPs: []corev1.PodIP{{IP: ip}},
},
}
}

View File

@@ -0,0 +1,742 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build !plan9
package main
import (
"context"
"crypto/sha256"
"encoding/json"
"errors"
"fmt"
"math/rand/v2"
"reflect"
"slices"
"strings"
"sync"
"go.uber.org/zap"
corev1 "k8s.io/api/core/v1"
discoveryv1 "k8s.io/api/discovery/v1"
apiequality "k8s.io/apimachinery/pkg/api/equality"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/intstr"
"k8s.io/apimachinery/pkg/util/sets"
"k8s.io/apiserver/pkg/storage/names"
"k8s.io/client-go/tools/record"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
tsoperator "tailscale.com/k8s-operator"
tsapi "tailscale.com/k8s-operator/apis/v1alpha1"
"tailscale.com/kube/egressservices"
"tailscale.com/kube/kubetypes"
"tailscale.com/tstime"
"tailscale.com/util/clientmetric"
"tailscale.com/util/mak"
"tailscale.com/util/set"
)
const (
reasonEgressSvcInvalid = "EgressSvcInvalid"
reasonEgressSvcValid = "EgressSvcValid"
reasonEgressSvcCreationFailed = "EgressSvcCreationFailed"
reasonProxyGroupNotReady = "ProxyGroupNotReady"
labelProxyGroup = "tailscale.com/proxy-group"
labelSvcType = "tailscale.com/svc-type" // ingress or egress
typeEgress = "egress"
// maxPorts is the maximum number of ports that can be exposed on a
// container. In practice this will be ports in range [10000 - 11000). The
// high range should make it easier to distinguish container ports from
// the tailnet target ports for debugging purposes (i.e. when reading
// netfilter rules). The limit of 1000 is somewhat arbitrary; the
// assumption is that this would not be hit in practice.
maxPorts = 1000
indexEgressProxyGroup = ".metadata.annotations.egress-proxy-group"
)
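// For illustration (a sketch, not the implementation referenced below): with maxPorts container
// ports available, an allocator like unusedPort can keep drawing random candidates from the
// range until it hits one that is not yet in the used set, e.g.
//
//	for {
//		p := rangeStart + rand.Int32N(maxPorts)
//		if !used.Has(p) {
//			return p
//		}
//	}
//
// provision guards against exhaustion by erroring out once usedPorts.Len() >= maxPorts.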
var gaugeEgressServices = clientmetric.NewGauge(kubetypes.MetricEgressServiceCount)
// egressSvcsReconciler reconciles user created ExternalName Services that specify a tailnet
// endpoint that should be exposed to cluster workloads and an egress ProxyGroup
// on whose proxies it should be exposed.
type egressSvcsReconciler struct {
client.Client
logger *zap.SugaredLogger
recorder record.EventRecorder
clock tstime.Clock
tsNamespace string
mu sync.Mutex // protects following
svcs set.Slice[types.UID] // UIDs of all currently managed egress Services for ProxyGroup
}
// Reconcile reconciles an ExternalName Service that specifies a tailnet target and a ProxyGroup whose proxies should
// forward cluster traffic to the target.
// For an ExternalName Service the reconciler:
//
// - for each port N defined on the ExternalName Service, allocates a port X in range [3000 - 4000), unique for the
// ProxyGroup proxies. Proxies will forward cluster traffic received on port X to port N on the tailnet target
//
// - creates a ClusterIP Service in the operator's namespace with portmappings for all N->X port pairs. This will allow
// cluster workloads to send traffic on the user-defined tailnet target port and get it transparently mapped to the
// randomly selected port on proxy Pods.
//
// - creates an EndpointSlice in the operator's namespace with a kubernetes.io/service-name label pointing to the
// ClusterIP Service. The endpoints will get dynamically updated to proxy Pod IPs as the Pods become ready to route
// traffic to the tailnet target. The kubernetes.io/service-name label ensures that kube-proxy sets up routing rules to
// forward cluster traffic received on the ClusterIP Service's IP address to the endpoints (Pod IPs).
//
// - updates the egress service config in a ConfigMap mounted to the ProxyGroup proxies with the tailnet target and the
// portmappings.
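// For illustration (hypothetical names and ports, not taken from this change): an ExternalName
// Service "db" in namespace "dev" with port 5432 and annotation tailscale.com/proxy-group=prod
// would end up with a ClusterIP Service in the operator's namespace whose port 5432 targets an
// allocated container port such as 10080, an EndpointSlice labelled with
// kubernetes.io/service-name=<ClusterIP Service name> whose endpoints are the ready proxy Pod
// IPs, and a ConfigMap entry telling the proxies to forward container port 10080 to port 5432
// on the tailnet target.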
func (esr *egressSvcsReconciler) Reconcile(ctx context.Context, req reconcile.Request) (res reconcile.Result, err error) {
l := esr.logger.With("Service", req.NamespacedName)
defer l.Info("reconcile finished")
svc := new(corev1.Service)
if err = esr.Get(ctx, req.NamespacedName, svc); apierrors.IsNotFound(err) {
l.Info("Service not found")
return res, nil
} else if err != nil {
return res, fmt.Errorf("failed to get Service: %w", err)
}
// Name of the 'egress service', meaning the tailnet target.
tailnetSvc := tailnetSvcName(svc)
l = l.With("tailnet-service", tailnetSvc)
// Note that resources for egress Services are only cleaned up when the
// Service is actually deleted (and not if, for example, the user decides to
// remove the Tailscale annotation from it). This should be fine - we
// assume that egress ExternalName Services are always created specifically
// for the Tailscale operator.
if !svc.DeletionTimestamp.IsZero() {
l.Info("Service is being deleted, ensuring resource cleanup")
return res, esr.maybeCleanup(ctx, svc, l)
}
oldStatus := svc.Status.DeepCopy()
defer func() {
if !apiequality.Semantic.DeepEqual(oldStatus, &svc.Status) {
err = errors.Join(err, esr.Status().Update(ctx, svc))
}
}()
// Validate the user-created ExternalName Service and the associated ProxyGroup.
if ok, err := esr.validateClusterResources(ctx, svc, l); err != nil {
return res, fmt.Errorf("error validating cluster resources: %w", err)
} else if !ok {
return res, nil
}
if !slices.Contains(svc.Finalizers, FinalizerName) {
svc.Finalizers = append(svc.Finalizers, FinalizerName)
if err := esr.updateSvcSpec(ctx, svc); err != nil {
err := fmt.Errorf("failed to add finalizer: %w", err)
r := svcConfiguredReason(svc, false, l)
tsoperator.SetServiceCondition(svc, tsapi.EgressSvcConfigured, metav1.ConditionFalse, r, err.Error(), esr.clock, l)
return res, err
}
esr.mu.Lock()
esr.svcs.Add(svc.UID)
gaugeEgressServices.Set(int64(esr.svcs.Len()))
esr.mu.Unlock()
}
if err := esr.maybeCleanupProxyGroupConfig(ctx, svc, l); err != nil {
err = fmt.Errorf("cleaning up resources for previous ProxyGroup failed: %w", err)
r := svcConfiguredReason(svc, false, l)
tsoperator.SetServiceCondition(svc, tsapi.EgressSvcConfigured, metav1.ConditionFalse, r, err.Error(), esr.clock, l)
return res, err
}
if err := esr.maybeProvision(ctx, svc, l); err != nil {
if strings.Contains(err.Error(), optimisticLockErrorMsg) {
l.Infof("optimistic lock error, retrying: %s", err)
} else {
return reconcile.Result{}, err
}
}
return res, nil
}
func (esr *egressSvcsReconciler) maybeProvision(ctx context.Context, svc *corev1.Service, l *zap.SugaredLogger) (err error) {
r := svcConfiguredReason(svc, false, l)
st := metav1.ConditionFalse
defer func() {
msg := r
if st != metav1.ConditionTrue && err != nil {
msg = err.Error()
}
tsoperator.SetServiceCondition(svc, tsapi.EgressSvcConfigured, st, r, msg, esr.clock, l)
}()
crl := egressSvcChildResourceLabels(svc)
clusterIPSvc, err := getSingleObject[corev1.Service](ctx, esr.Client, esr.tsNamespace, crl)
if err != nil {
err = fmt.Errorf("error retrieving ClusterIP Service: %w", err)
return err
}
if clusterIPSvc == nil {
clusterIPSvc = esr.clusterIPSvcForEgress(crl)
}
upToDate := svcConfigurationUpToDate(svc, l)
provisioned := true
if !upToDate {
if clusterIPSvc, provisioned, err = esr.provision(ctx, svc.Annotations[AnnotationProxyGroup], svc, clusterIPSvc, l); err != nil {
return err
}
}
if !provisioned {
l.Infof("unable to provision cluster resources")
return nil
}
// Update ExternalName Service to point at the ClusterIP Service.
clusterDomain := retrieveClusterDomain(esr.tsNamespace, l)
clusterIPSvcFQDN := fmt.Sprintf("%s.%s.svc.%s", clusterIPSvc.Name, clusterIPSvc.Namespace, clusterDomain)
if svc.Spec.ExternalName != clusterIPSvcFQDN {
l.Infof("Configuring ExternalName Service to point to ClusterIP Service %s", clusterIPSvcFQDN)
svc.Spec.ExternalName = clusterIPSvcFQDN
if err = esr.updateSvcSpec(ctx, svc); err != nil {
err = fmt.Errorf("error updating ExternalName Service: %w", err)
return err
}
}
r = svcConfiguredReason(svc, true, l)
st = metav1.ConditionTrue
return nil
}
func (esr *egressSvcsReconciler) provision(ctx context.Context, proxyGroupName string, svc, clusterIPSvc *corev1.Service, l *zap.SugaredLogger) (*corev1.Service, bool, error) {
l.Infof("updating configuration...")
usedPorts, err := esr.usedPortsForPG(ctx, proxyGroupName)
if err != nil {
return nil, false, fmt.Errorf("error calculating used ports for ProxyGroup %s: %w", proxyGroupName, err)
}
oldClusterIPSvc := clusterIPSvc.DeepCopy()
// loop over ClusterIP Service ports, remove any that are not needed.
for i := len(clusterIPSvc.Spec.Ports) - 1; i >= 0; i-- {
pm := clusterIPSvc.Spec.Ports[i]
found := false
for _, wantsPM := range svc.Spec.Ports {
if wantsPM.Port == pm.Port && strings.EqualFold(string(wantsPM.Protocol), string(pm.Protocol)) {
// We don't use the port name to distinguish this port internally, but Kubernetes
// requires that, for Services with more than one port, each port is uniquely named.
// So we can always pick the port name from the ExternalName Service, as at this point we
// know that those are valid names because Kubernetes has already validated them. Note
// that users could have changed an unnamed port to a named port and might have changed
// port names - this should still work.
// https://kubernetes.io/docs/concepts/services-networking/service/#multi-port-services
// See also https://github.com/tailscale/tailscale/issues/13406#issuecomment-2507230388
clusterIPSvc.Spec.Ports[i].Name = wantsPM.Name
found = true
break
}
}
if !found {
l.Debugf("portmapping %s:%d -> %s:%d is no longer required, removing", pm.Protocol, pm.TargetPort.IntVal, pm.Protocol, pm.Port)
clusterIPSvc.Spec.Ports = slices.Delete(clusterIPSvc.Spec.Ports, i, i+1)
}
}
// loop over ExternalName Service ports, for each one not found on
// ClusterIP Service produce new target port and add a portmapping to
// the ClusterIP Service.
for _, wantsPM := range svc.Spec.Ports {
found := false
for _, gotPM := range clusterIPSvc.Spec.Ports {
if wantsPM.Port == gotPM.Port && strings.EqualFold(string(wantsPM.Protocol), string(gotPM.Protocol)) {
found = true
break
}
}
if !found {
// Calculate a free port to expose on container and add
// a new PortMap to the ClusterIP Service.
if usedPorts.Len() >= maxPorts {
// TODO(irbekrm): refactor to avoid extra reconciles here. Low priority as in practice,
// the limit should not be hit.
return nil, false, fmt.Errorf("unable to allocate additional ports on ProxyGroup %s, %d ports already used. Create another ProxyGroup or open an issue if you believe this is unexpected.", proxyGroupName, maxPorts)
}
p := unusedPort(usedPorts)
l.Debugf("mapping tailnet target port %d to container port %d", wantsPM.Port, p)
usedPorts.Insert(p)
clusterIPSvc.Spec.Ports = append(clusterIPSvc.Spec.Ports, corev1.ServicePort{
Name: wantsPM.Name,
Protocol: wantsPM.Protocol,
Port: wantsPM.Port,
TargetPort: intstr.FromInt32(p),
})
}
}
if !reflect.DeepEqual(clusterIPSvc, oldClusterIPSvc) {
if clusterIPSvc, err = createOrUpdate(ctx, esr.Client, esr.tsNamespace, clusterIPSvc, func(svc *corev1.Service) {
svc.Labels = clusterIPSvc.Labels
svc.Spec = clusterIPSvc.Spec
}); err != nil {
return nil, false, fmt.Errorf("error ensuring ClusterIP Service: %v", err)
}
}
crl := egressSvcEpsLabels(svc, clusterIPSvc)
// TODO(irbekrm): support IPv6, but need to investigate how kube proxy
// sets up Service -> Pod routing when IPv6 is involved.
eps := &discoveryv1.EndpointSlice{
ObjectMeta: metav1.ObjectMeta{
Name: fmt.Sprintf("%s-ipv4", clusterIPSvc.Name),
Namespace: esr.tsNamespace,
Labels: crl,
},
AddressType: discoveryv1.AddressTypeIPv4,
Ports: epsPortsFromSvc(clusterIPSvc),
}
if eps, err = createOrUpdate(ctx, esr.Client, esr.tsNamespace, eps, func(e *discoveryv1.EndpointSlice) {
e.Labels = eps.Labels
e.AddressType = eps.AddressType
e.Ports = eps.Ports
for _, p := range e.Endpoints {
p.Conditions.Ready = nil
}
}); err != nil {
return nil, false, fmt.Errorf("error ensuring EndpointSlice: %w", err)
}
cm, cfgs, err := egressSvcsConfigs(ctx, esr.Client, proxyGroupName, esr.tsNamespace)
if err != nil {
return nil, false, fmt.Errorf("error retrieving egress services configuration: %w", err)
}
if cm == nil {
l.Info("ConfigMap not yet created, waiting..")
return nil, false, nil
}
tailnetSvc := tailnetSvcName(svc)
gotCfg := (*cfgs)[tailnetSvc]
wantsCfg := egressSvcCfg(svc, clusterIPSvc)
if !reflect.DeepEqual(gotCfg, wantsCfg) {
l.Debugf("updating egress services ConfigMap %s", cm.Name)
mak.Set(cfgs, tailnetSvc, wantsCfg)
bs, err := json.Marshal(cfgs)
if err != nil {
return nil, false, fmt.Errorf("error marshalling egress services configs: %w", err)
}
mak.Set(&cm.BinaryData, egressservices.KeyEgressServices, bs)
if err := esr.Update(ctx, cm); err != nil {
return nil, false, fmt.Errorf("error updating egress services ConfigMap: %w", err)
}
}
l.Infof("egress service configuration has been updated")
return clusterIPSvc, true, nil
}
func (esr *egressSvcsReconciler) maybeCleanup(ctx context.Context, svc *corev1.Service, logger *zap.SugaredLogger) error {
logger.Info("ensuring that resources created for egress service are deleted")
// Delete egress service config from the ConfigMap mounted by the proxies.
if err := esr.ensureEgressSvcCfgDeleted(ctx, svc, logger); err != nil {
return fmt.Errorf("error deleting egress service config: %w", err)
}
// Delete the ClusterIP Service and EndpointSlice for the egress
// service.
types := []client.Object{
&corev1.Service{},
&discoveryv1.EndpointSlice{},
}
crl := egressSvcChildResourceLabels(svc)
for _, typ := range types {
if err := esr.DeleteAllOf(ctx, typ, client.InNamespace(esr.tsNamespace), client.MatchingLabels(crl)); err != nil {
return fmt.Errorf("error deleting %s: %w", typ, err)
}
}
ix := slices.Index(svc.Finalizers, FinalizerName)
if ix != -1 {
logger.Debug("Removing Tailscale finalizer from Service")
svc.Finalizers = append(svc.Finalizers[:ix], svc.Finalizers[ix+1:]...)
if err := esr.Update(ctx, svc); err != nil {
return fmt.Errorf("failed to remove finalizer: %w", err)
}
}
esr.mu.Lock()
esr.svcs.Remove(svc.UID)
gaugeEgressServices.Set(int64(esr.svcs.Len()))
esr.mu.Unlock()
logger.Info("successfully cleaned up resources for egress Service")
return nil
}
func (esr *egressSvcsReconciler) maybeCleanupProxyGroupConfig(ctx context.Context, svc *corev1.Service, l *zap.SugaredLogger) error {
wantsProxyGroup := svc.Annotations[AnnotationProxyGroup]
cond := tsoperator.GetServiceCondition(svc, tsapi.EgressSvcConfigured)
if cond == nil {
return nil
}
ss := strings.Split(cond.Reason, ":")
if len(ss) < 3 {
return nil
}
if strings.EqualFold(wantsProxyGroup, ss[2]) {
return nil
}
esr.logger.Infof("egress Service configured on ProxyGroup %s, wants ProxyGroup %s, cleaning up...", ss[2], wantsProxyGroup)
if err := esr.ensureEgressSvcCfgDeleted(ctx, svc, l); err != nil {
return fmt.Errorf("error deleting egress service config: %w", err)
}
return nil
}
// usedPortsForPG calculates the currently used match ports for ProxyGroup
// containers. It does that by retrieving the target ports of all ClusterIP
// Services created for egress services exposed on this ProxyGroup's proxies.
// TODO(irbekrm): this is currently good enough because we only have a single worker and
// because these Services are created by us, so we can always expect to get the
// latest ClusterIP Services via the controller cache. It will not work as well
// once we split into multiple workers - at that point we probably want to set
// used ports on the ProxyGroup's status.
func (esr *egressSvcsReconciler) usedPortsForPG(ctx context.Context, pg string) (sets.Set[int32], error) {
svcList := &corev1.ServiceList{}
if err := esr.List(ctx, svcList, client.InNamespace(esr.tsNamespace), client.MatchingLabels(map[string]string{labelProxyGroup: pg})); err != nil {
return nil, fmt.Errorf("error listing Services: %w", err)
}
usedPorts := sets.New[int32]()
for _, s := range svcList.Items {
for _, p := range s.Spec.Ports {
usedPorts.Insert(p.TargetPort.IntVal)
}
}
return usedPorts, nil
}
// clusterIPSvcForEgress returns a template for the ClusterIP Service created
// for an egress service exposed on ProxyGroup proxies. The ClusterIP Service
// has no selector. Traffic sent to it will be routed to the endpoints defined
// by an EndpointSlice created for this egress service.
func (esr *egressSvcsReconciler) clusterIPSvcForEgress(crl map[string]string) *corev1.Service {
return &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
GenerateName: svcNameBase(crl[LabelParentName]),
Namespace: esr.tsNamespace,
Labels: crl,
},
Spec: corev1.ServiceSpec{
Type: corev1.ServiceTypeClusterIP,
},
}
}
func (esr *egressSvcsReconciler) ensureEgressSvcCfgDeleted(ctx context.Context, svc *corev1.Service, logger *zap.SugaredLogger) error {
crl := egressSvcChildResourceLabels(svc)
cmName := pgEgressCMName(crl[labelProxyGroup])
cm := &corev1.ConfigMap{
ObjectMeta: metav1.ObjectMeta{
Name: cmName,
Namespace: esr.tsNamespace,
},
}
l := logger.With("ConfigMap", client.ObjectKeyFromObject(cm))
l.Debug("ensuring that egress service configuration is removed from proxy config")
if err := esr.Get(ctx, client.ObjectKeyFromObject(cm), cm); apierrors.IsNotFound(err) {
l.Debugf("ConfigMap not found")
return nil
} else if err != nil {
return fmt.Errorf("error retrieving ConfigMap: %w", err)
}
bs := cm.BinaryData[egressservices.KeyEgressServices]
if len(bs) == 0 {
l.Debugf("ConfigMap does not contain egress service configs")
return nil
}
cfgs := &egressservices.Configs{}
if err := json.Unmarshal(bs, cfgs); err != nil {
return fmt.Errorf("error unmarshalling egress services configs")
}
tailnetSvc := tailnetSvcName(svc)
_, ok := (*cfgs)[tailnetSvc]
if !ok {
l.Debugf("ConfigMap does not contain egress service config, likely because it was already deleted")
return nil
}
l.Infof("before deleting config %+#v", *cfgs)
delete(*cfgs, tailnetSvc)
l.Infof("after deleting config %+#v", *cfgs)
bs, err := json.Marshal(cfgs)
if err != nil {
return fmt.Errorf("error marshalling egress services configs: %w", err)
}
mak.Set(&cm.BinaryData, egressservices.KeyEgressServices, bs)
return esr.Update(ctx, cm)
}
func (esr *egressSvcsReconciler) validateClusterResources(ctx context.Context, svc *corev1.Service, l *zap.SugaredLogger) (bool, error) {
proxyGroupName := svc.Annotations[AnnotationProxyGroup]
pg := &tsapi.ProxyGroup{
ObjectMeta: metav1.ObjectMeta{
Name: proxyGroupName,
},
}
if err := esr.Get(ctx, client.ObjectKeyFromObject(pg), pg); apierrors.IsNotFound(err) {
l.Infof("ProxyGroup %q not found, waiting...", proxyGroupName)
tsoperator.SetServiceCondition(svc, tsapi.EgressSvcValid, metav1.ConditionUnknown, reasonProxyGroupNotReady, reasonProxyGroupNotReady, esr.clock, l)
tsoperator.RemoveServiceCondition(svc, tsapi.EgressSvcConfigured)
return false, nil
} else if err != nil {
err := fmt.Errorf("unable to retrieve ProxyGroup %s: %w", proxyGroupName, err)
tsoperator.SetServiceCondition(svc, tsapi.EgressSvcValid, metav1.ConditionUnknown, reasonProxyGroupNotReady, err.Error(), esr.clock, l)
tsoperator.RemoveServiceCondition(svc, tsapi.EgressSvcConfigured)
return false, err
}
if violations := validateEgressService(svc, pg); len(violations) > 0 {
msg := fmt.Sprintf("invalid egress Service: %s", strings.Join(violations, ", "))
esr.recorder.Event(svc, corev1.EventTypeWarning, "INVALIDSERVICE", msg)
l.Info(msg)
tsoperator.SetServiceCondition(svc, tsapi.EgressSvcValid, metav1.ConditionFalse, reasonEgressSvcInvalid, msg, esr.clock, l)
tsoperator.RemoveServiceCondition(svc, tsapi.EgressSvcConfigured)
return false, nil
}
if !tsoperator.ProxyGroupIsReady(pg) {
l.Infof("ProxyGroup %s is not ready, waiting...", proxyGroupName)
tsoperator.SetServiceCondition(svc, tsapi.EgressSvcValid, metav1.ConditionUnknown, reasonProxyGroupNotReady, reasonProxyGroupNotReady, esr.clock, l)
tsoperator.RemoveServiceCondition(svc, tsapi.EgressSvcConfigured)
return false, nil
}
l.Debugf("egress service is valid")
tsoperator.SetServiceCondition(svc, tsapi.EgressSvcValid, metav1.ConditionTrue, reasonEgressSvcValid, reasonEgressSvcValid, esr.clock, l)
return true, nil
}
func validateEgressService(svc *corev1.Service, pg *tsapi.ProxyGroup) []string {
violations := validateService(svc)
// We check that only one of these two is set in the earlier validateService function.
if svc.Annotations[AnnotationTailnetTargetFQDN] == "" && svc.Annotations[AnnotationTailnetTargetIP] == "" {
violations = append(violations, fmt.Sprintf("egress Service for ProxyGroup must have one of %s, %s annotations set", AnnotationTailnetTargetFQDN, AnnotationTailnetTargetIP))
}
if len(svc.Spec.Ports) == 0 {
violations = append(violations, "egress Service for ProxyGroup must have at least one target Port specified")
}
if svc.Spec.Type != corev1.ServiceTypeExternalName {
violations = append(violations, fmt.Sprintf("unexpected egress Service type %s. The only supported type is ExternalName.", svc.Spec.Type))
}
if pg.Spec.Type != tsapi.ProxyGroupTypeEgress {
violations = append(violations, fmt.Sprintf("egress Service references ProxyGroup of type %s, must be type %s", pg.Spec.Type, tsapi.ProxyGroupTypeEgress))
}
return violations
}
// svcNameBase returns a name base that can be passed to
// ObjectMeta.GenerateName to generate a name for the ClusterIP Service.
// The generated name needs to be short enough so that it can later be used to
// generate a valid Kubernetes resource name for the EndpointSlice in the form
// '<ClusterIP Service name>-ipv4' or '<ClusterIP Service name>-ipv6'.
// A valid Kubernetes resource name must not be longer than 253 chars.
func svcNameBase(s string) string {
// Reserve 5 chars for the "-ipv4"/"-ipv6" suffix appended when naming the EndpointSlice.
const maxClusterIPSvcNameLength = 253 - 5
base := fmt.Sprintf("ts-%s-", s)
generator := names.SimpleNameGenerator
for {
generatedName := generator.GenerateName(base)
excess := len(generatedName) - maxClusterIPSvcNameLength
if excess <= 0 {
return base
}
base = base[:len(base)-1-excess] // cut off the excess chars
base = base + "-" // re-instate the dash
}
}
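// Usage sketch (names assumed for illustration): for a parent ExternalName
// Service named "db", the returned base is "ts-db-"; clusterIPSvcForEgress
// above sets it as ObjectMeta.GenerateName, so the API server appends a random
// suffix when the ClusterIP Service is created, e.g. "ts-db-abc12".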
// unusedPort returns a port in range [10000 - 11000). The caller must ensure that
// usedPorts does not contain all ports in range [10000 - 11000).
func unusedPort(usedPorts sets.Set[int32]) int32 {
foundFreePort := false
var suggestPort int32
for !foundFreePort {
suggestPort = rand.Int32N(maxPorts) + 10000
if !usedPorts.Has(suggestPort) {
foundFreePort = true
}
}
return suggestPort
}
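// A minimal usage sketch (illustrative; mirrors provision above):
//
//	used, _ := esr.usedPortsForPG(ctx, "foo") // ports already claimed on ProxyGroup "foo"
//	p := unusedPort(used)                     // free port in [10000, 10000+maxPorts)
//	used.Insert(p)                            // reserve it before allocating the next one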
// tailnetTargetFromSvc returns a tailnet target for the given egress Service.
// Service must contain exactly one of tailscale.com/tailnet-ip,
// tailscale.com/tailnet-fqdn annotations.
func tailnetTargetFromSvc(svc *corev1.Service) egressservices.TailnetTarget {
if fqdn := svc.Annotations[AnnotationTailnetTargetFQDN]; fqdn != "" {
return egressservices.TailnetTarget{
FQDN: fqdn,
}
}
return egressservices.TailnetTarget{
IP: svc.Annotations[AnnotationTailnetTargetIP],
}
}
func egressSvcCfg(externalNameSvc, clusterIPSvc *corev1.Service) egressservices.Config {
tt := tailnetTargetFromSvc(externalNameSvc)
cfg := egressservices.Config{TailnetTarget: tt}
for _, svcPort := range clusterIPSvc.Spec.Ports {
pm := portMap(svcPort)
mak.Set(&cfg.Ports, pm, struct{}{})
}
return cfg
}
func portMap(p corev1.ServicePort) egressservices.PortMap {
// TODO (irbekrm): out of bounds check?
return egressservices.PortMap{Protocol: string(p.Protocol), MatchPort: uint16(p.TargetPort.IntVal), TargetPort: uint16(p.Port)}
}
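// For illustration (assumed values): a ClusterIP Service port
//
//	corev1.ServicePort{Protocol: "TCP", Port: 443, TargetPort: intstr.FromInt32(10080)}
//
// maps to
//
//	egressservices.PortMap{Protocol: "TCP", MatchPort: 10080, TargetPort: 443}
//
// i.e. the proxy matches traffic arriving on container port 10080 and forwards
// it to port 443 on the tailnet target.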
func isEgressSvcForProxyGroup(obj client.Object) bool {
s, ok := obj.(*corev1.Service)
if !ok {
return false
}
annots := s.ObjectMeta.Annotations
return annots[AnnotationProxyGroup] != "" && (annots[AnnotationTailnetTargetFQDN] != "" || annots[AnnotationTailnetTargetIP] != "")
}
// egressSvcsConfigs returns the ConfigMap that contains egress services configuration for the provided ProxyGroup as well
// as the unmarshalled configuration from that ConfigMap.
func egressSvcsConfigs(ctx context.Context, cl client.Client, proxyGroupName, tsNamespace string) (cm *corev1.ConfigMap, cfgs *egressservices.Configs, err error) {
name := pgEgressCMName(proxyGroupName)
cm = &corev1.ConfigMap{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: tsNamespace,
},
}
if err := cl.Get(ctx, client.ObjectKeyFromObject(cm), cm); err != nil {
return nil, nil, fmt.Errorf("error retrieving egress services ConfigMap %s: %v", name, err)
}
cfgs = &egressservices.Configs{}
if len(cm.BinaryData[egressservices.KeyEgressServices]) != 0 {
if err := json.Unmarshal(cm.BinaryData[egressservices.KeyEgressServices], cfgs); err != nil {
return nil, nil, fmt.Errorf("error unmarshaling egress services config %v: %w", cm.BinaryData[egressservices.KeyEgressServices], err)
}
}
return cm, cfgs, nil
}
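// Sketch of the expected round trip (mirrors provision above; "foo",
// "tailscale" and "default-test" are assumed example values):
//
//	cm, cfgs, err := egressSvcsConfigs(ctx, cl, "foo", "tailscale")
//	if err == nil && cm != nil {
//		mak.Set(cfgs, "default-test", egressSvcCfg(extNSvc, clusterIPSvc))
//		bs, _ := json.Marshal(cfgs)
//		mak.Set(&cm.BinaryData, egressservices.KeyEgressServices, bs)
//		// ...followed by an Update of the ConfigMap, as in provision.
//	}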
// egressSvcChildResourceLabels returns labels that should be applied to the
// ClusterIP Service and the EndpointSlice created for the egress service.
// TODO(irbekrm): we currently set a bunch of labels based on Kubernetes
// resource names (ProxyGroup, Service). Maximum allowed label length is 63
// chars whilst the maximum allowed resource name length is 253 chars, so we
// should probably validate and truncate (?) the names if they are too long.
func egressSvcChildResourceLabels(svc *corev1.Service) map[string]string {
return map[string]string{
LabelManaged: "true",
LabelParentType: "svc",
LabelParentName: svc.Name,
LabelParentNamespace: svc.Namespace,
labelProxyGroup: svc.Annotations[AnnotationProxyGroup],
labelSvcType: typeEgress,
}
}
// egressSvcEpsLabels returns labels to be added to an EndpointSlice created for an egress service.
func egressSvcEpsLabels(extNSvc, clusterIPSvc *corev1.Service) map[string]string {
l := egressSvcChildResourceLabels(extNSvc)
// Adding this label is what makes kube proxy set up rules to route traffic sent to the clusterIP Service to the
// endpoints defined on this EndpointSlice.
// https://kubernetes.io/docs/concepts/services-networking/endpoint-slices/#ownership
l[discoveryv1.LabelServiceName] = clusterIPSvc.Name
// Kubernetes recommends setting this label.
// https://kubernetes.io/docs/concepts/services-networking/endpoint-slices/#management
l[discoveryv1.LabelManagedBy] = "tailscale.com"
return l
}
func svcConfigurationUpToDate(svc *corev1.Service, l *zap.SugaredLogger) bool {
cond := tsoperator.GetServiceCondition(svc, tsapi.EgressSvcConfigured)
if cond == nil {
return false
}
if cond.Status != metav1.ConditionTrue {
return false
}
wantsReadyReason := svcConfiguredReason(svc, true, l)
return strings.EqualFold(wantsReadyReason, cond.Reason)
}
func cfgHash(c cfg, l *zap.SugaredLogger) string {
bs, err := json.Marshal(c)
if err != nil {
// Don't use l.Error as that messes up component logs with, in this case, unnecessary stack trace.
l.Infof("error marhsalling Config: %v", err)
return ""
}
h := sha256.New()
if _, err := h.Write(bs); err != nil {
// Don't use l.Error as that messes up component logs with, in this case, unnecessary stack trace.
l.Infof("error producing Config hash: %v", err)
return ""
}
return fmt.Sprintf("%x", h.Sum(nil))
}
type cfg struct {
Ports []corev1.ServicePort `json:"ports"`
TailnetTarget egressservices.TailnetTarget `json:"tailnetTarget"`
ProxyGroup string `json:"proxyGroup"`
}
func svcConfiguredReason(svc *corev1.Service, configured bool, l *zap.SugaredLogger) string {
var r string
if configured {
r = "ConfiguredFor:"
} else {
r = fmt.Sprintf("ConfigurationFailed:%s", r)
}
r += fmt.Sprintf("ProxyGroup:%s", svc.Annotations[AnnotationProxyGroup])
tt := tailnetTargetFromSvc(svc)
s := cfg{
Ports: svc.Spec.Ports,
TailnetTarget: tt,
ProxyGroup: svc.Annotations[AnnotationProxyGroup],
}
r += fmt.Sprintf(":Config:%s", cfgHash(s, l))
return r
}
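// For a configured Service whose AnnotationProxyGroup value is "foo" (example
// value), the resulting Reason has the form
//
//	ConfiguredFor:ProxyGroup:foo:Config:<sha256 hex of the marshalled cfg>
//
// svcConfigurationUpToDate compares this string against the stored condition
// Reason to decide whether re-provisioning is needed, and
// maybeCleanupProxyGroupConfig reads the third ':'-separated field to recover
// the ProxyGroup the Service was last configured for.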
// tailnetSvcName accepts an ExternalName Service and returns a name that will be used to distinguish this tailnet
// service from other tailnet services exposed to cluster workloads.
func tailnetSvcName(extNSvc *corev1.Service) string {
return fmt.Sprintf("%s-%s", extNSvc.Namespace, extNSvc.Name)
}
// epsPortsFromSvc takes the ClusterIP Service created for an egress service and
// returns its Port array in a form that can be used for an EndpointSlice.
func epsPortsFromSvc(svc *corev1.Service) (ep []discoveryv1.EndpointPort) {
for _, p := range svc.Spec.Ports {
ep = append(ep, discoveryv1.EndpointPort{
Protocol: &p.Protocol,
Port: &p.TargetPort.IntVal,
Name: &p.Name,
})
}
return ep
}
// updateSvcSpec ensures that the given Service's spec is updated in cluster, but the local Service object still retains
// the not-yet-applied status.
// TODO(irbekrm): once we do SSA for these patch updates, this will no longer be needed.
func (esr *egressSvcsReconciler) updateSvcSpec(ctx context.Context, svc *corev1.Service) error {
st := svc.Status.DeepCopy()
err := esr.Update(ctx, svc)
svc.Status = *st
return err
}

View File

@@ -0,0 +1,303 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build !plan9
package main
import (
"context"
"encoding/json"
"fmt"
"testing"
"github.com/AlekSi/pointer"
"github.com/google/go-cmp/cmp"
"go.uber.org/zap"
corev1 "k8s.io/api/core/v1"
discoveryv1 "k8s.io/api/discovery/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/client/fake"
tsapi "tailscale.com/k8s-operator/apis/v1alpha1"
"tailscale.com/kube/egressservices"
"tailscale.com/tstest"
"tailscale.com/tstime"
)
func TestTailscaleEgressServices(t *testing.T) {
pg := &tsapi.ProxyGroup{
TypeMeta: metav1.TypeMeta{Kind: "ProxyGroup", APIVersion: "tailscale.com/v1alpha1"},
ObjectMeta: metav1.ObjectMeta{
Name: "foo",
UID: types.UID("1234-UID"),
},
Spec: tsapi.ProxyGroupSpec{
Replicas: pointer.To[int32](3),
Type: tsapi.ProxyGroupTypeEgress,
},
}
cm := &corev1.ConfigMap{
ObjectMeta: metav1.ObjectMeta{
Name: pgEgressCMName("foo"),
Namespace: "operator-ns",
},
}
fc := fake.NewClientBuilder().
WithScheme(tsapi.GlobalScheme).
WithObjects(pg, cm).
WithStatusSubresource(pg).
Build()
zl, err := zap.NewDevelopment()
if err != nil {
t.Fatal(err)
}
clock := tstest.NewClock(tstest.ClockOpts{})
esr := &egressSvcsReconciler{
Client: fc,
logger: zl.Sugar(),
clock: clock,
tsNamespace: "operator-ns",
}
tailnetTargetFQDN := "foo.bar.ts.net."
svc := &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: "test",
Namespace: "default",
UID: types.UID("1234-UID"),
Annotations: map[string]string{
AnnotationTailnetTargetFQDN: tailnetTargetFQDN,
AnnotationProxyGroup: "foo",
},
},
Spec: corev1.ServiceSpec{
ExternalName: "placeholder",
Type: corev1.ServiceTypeExternalName,
Selector: nil,
Ports: []corev1.ServicePort{
{
Name: "http",
Protocol: "TCP",
Port: 80,
},
{
Name: "https",
Protocol: "TCP",
Port: 443,
},
},
},
}
t.Run("proxy_group_not_ready", func(t *testing.T) {
mustCreate(t, fc, svc)
expectReconciled(t, esr, "default", "test")
// Service should have EgressSvcValid condition set to Unknown.
svc.Status.Conditions = []metav1.Condition{condition(tsapi.EgressSvcValid, metav1.ConditionUnknown, reasonProxyGroupNotReady, reasonProxyGroupNotReady, clock)}
expectEqual(t, fc, svc, nil)
})
t.Run("proxy_group_ready", func(t *testing.T) {
mustUpdateStatus(t, fc, "", "foo", func(pg *tsapi.ProxyGroup) {
pg.Status.Conditions = []metav1.Condition{
condition(tsapi.ProxyGroupReady, metav1.ConditionTrue, "", "", clock),
}
})
expectReconciled(t, esr, "default", "test")
validateReadyService(t, fc, esr, svc, clock, zl, cm)
})
t.Run("service_retain_one_unnamed_port", func(t *testing.T) {
svc.Spec.Ports = []corev1.ServicePort{{Protocol: "TCP", Port: 80}}
mustUpdate(t, fc, "default", "test", func(s *corev1.Service) {
s.Spec.Ports = svc.Spec.Ports
})
expectReconciled(t, esr, "default", "test")
validateReadyService(t, fc, esr, svc, clock, zl, cm)
})
t.Run("service_add_two_named_ports", func(t *testing.T) {
svc.Spec.Ports = []corev1.ServicePort{{Protocol: "TCP", Port: 80, Name: "http"}, {Protocol: "TCP", Port: 443, Name: "https"}}
mustUpdate(t, fc, "default", "test", func(s *corev1.Service) {
s.Spec.Ports = svc.Spec.Ports
})
expectReconciled(t, esr, "default", "test")
validateReadyService(t, fc, esr, svc, clock, zl, cm)
})
t.Run("service_add_udp_port", func(t *testing.T) {
svc.Spec.Ports = append(svc.Spec.Ports, corev1.ServicePort{Port: 53, Protocol: "UDP", Name: "dns"})
mustUpdate(t, fc, "default", "test", func(s *corev1.Service) {
s.Spec.Ports = svc.Spec.Ports
})
expectReconciled(t, esr, "default", "test")
validateReadyService(t, fc, esr, svc, clock, zl, cm)
})
t.Run("service_change_protocol", func(t *testing.T) {
svc.Spec.Ports = []corev1.ServicePort{{Protocol: "TCP", Port: 80, Name: "http"}, {Protocol: "TCP", Port: 443, Name: "https"}, {Port: 53, Protocol: "TCP", Name: "tcp_dns"}}
mustUpdate(t, fc, "default", "test", func(s *corev1.Service) {
s.Spec.Ports = svc.Spec.Ports
})
expectReconciled(t, esr, "default", "test")
validateReadyService(t, fc, esr, svc, clock, zl, cm)
})
t.Run("delete_external_name_service", func(t *testing.T) {
name := findGenNameForEgressSvcResources(t, fc, svc)
if err := fc.Delete(context.Background(), svc); err != nil {
t.Fatalf("error deleting ExternalName Service: %v", err)
}
expectReconciled(t, esr, "default", "test")
// Verify that ClusterIP Service and EndpointSlice have been deleted.
expectMissing[corev1.Service](t, fc, "operator-ns", name)
expectMissing[discoveryv1.EndpointSlice](t, fc, "operator-ns", fmt.Sprintf("%s-ipv4", name))
// Verify that service config has been deleted from the ConfigMap.
mustNotHaveConfigForSvc(t, fc, svc, cm)
})
}
func validateReadyService(t *testing.T, fc client.WithWatch, esr *egressSvcsReconciler, svc *corev1.Service, clock *tstest.Clock, zl *zap.Logger, cm *corev1.ConfigMap) {
expectReconciled(t, esr, "default", "test")
// Verify that a ClusterIP Service has been created.
name := findGenNameForEgressSvcResources(t, fc, svc)
expectEqual(t, fc, clusterIPSvc(name, svc), removeTargetPortsFromSvc)
clusterSvc := mustGetClusterIPSvc(t, fc, name)
// Verify that an EndpointSlice has been created.
expectEqual(t, fc, endpointSlice(name, svc, clusterSvc), nil)
// Verify that ConfigMap contains configuration for the new egress service.
mustHaveConfigForSvc(t, fc, svc, clusterSvc, cm)
r := svcConfiguredReason(svc, true, zl.Sugar())
// Verify that the user-created ExternalName Service has Configured set to true and ExternalName pointing to the
// ClusterIP Service.
svc.Status.Conditions = []metav1.Condition{
condition(tsapi.EgressSvcValid, metav1.ConditionTrue, "EgressSvcValid", "EgressSvcValid", clock),
condition(tsapi.EgressSvcConfigured, metav1.ConditionTrue, r, r, clock),
}
svc.ObjectMeta.Finalizers = []string{"tailscale.com/finalizer"}
svc.Spec.ExternalName = fmt.Sprintf("%s.operator-ns.svc.cluster.local", name)
expectEqual(t, fc, svc, nil)
}
func condition(typ tsapi.ConditionType, st metav1.ConditionStatus, r, msg string, clock tstime.Clock) metav1.Condition {
return metav1.Condition{
Type: string(typ),
Status: st,
LastTransitionTime: conditionTime(clock),
Reason: r,
Message: msg,
}
}
func findGenNameForEgressSvcResources(t *testing.T, client client.Client, svc *corev1.Service) string {
t.Helper()
labels := egressSvcChildResourceLabels(svc)
s, err := getSingleObject[corev1.Service](context.Background(), client, "operator-ns", labels)
if err != nil {
t.Fatalf("finding ClusterIP Service for ExternalName Service %s: %v", svc.Name, err)
}
if s == nil {
t.Fatalf("no ClusterIP Service found for ExternalName Service %q", svc.Name)
}
return s.GetName()
}
func clusterIPSvc(name string, extNSvc *corev1.Service) *corev1.Service {
labels := egressSvcChildResourceLabels(extNSvc)
return &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: "operator-ns",
GenerateName: fmt.Sprintf("ts-%s-", extNSvc.Name),
Labels: labels,
},
Spec: corev1.ServiceSpec{
Type: corev1.ServiceTypeClusterIP,
Ports: extNSvc.Spec.Ports,
},
}
}
func mustGetClusterIPSvc(t *testing.T, cl client.Client, name string) *corev1.Service {
svc := &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: "operator-ns",
},
}
if err := cl.Get(context.Background(), client.ObjectKeyFromObject(svc), svc); err != nil {
t.Fatalf("error retrieving Service")
}
return svc
}
func endpointSlice(name string, extNSvc, clusterIPSvc *corev1.Service) *discoveryv1.EndpointSlice {
labels := egressSvcChildResourceLabels(extNSvc)
labels[discoveryv1.LabelManagedBy] = "tailscale.com"
labels[discoveryv1.LabelServiceName] = name
return &discoveryv1.EndpointSlice{
ObjectMeta: metav1.ObjectMeta{
Name: fmt.Sprintf("%s-ipv4", name),
Namespace: "operator-ns",
Labels: labels,
},
Ports: portsForEndpointSlice(clusterIPSvc),
AddressType: discoveryv1.AddressTypeIPv4,
}
}
func portsForEndpointSlice(svc *corev1.Service) []discoveryv1.EndpointPort {
ports := make([]discoveryv1.EndpointPort, 0)
for _, p := range svc.Spec.Ports {
ports = append(ports, discoveryv1.EndpointPort{
Name: &p.Name,
Protocol: &p.Protocol,
Port: pointer.ToInt32(p.TargetPort.IntVal),
})
}
return ports
}
func mustHaveConfigForSvc(t *testing.T, cl client.Client, extNSvc, clusterIPSvc *corev1.Service, cm *corev1.ConfigMap) {
t.Helper()
wantsCfg := egressSvcCfg(extNSvc, clusterIPSvc)
if err := cl.Get(context.Background(), client.ObjectKeyFromObject(cm), cm); err != nil {
t.Fatalf("Error retrieving ConfigMap: %v", err)
}
name := tailnetSvcName(extNSvc)
gotCfg := configFromCM(t, cm, name)
if gotCfg == nil {
t.Fatalf("No config found for service %q", name)
}
if diff := cmp.Diff(*gotCfg, wantsCfg); diff != "" {
t.Fatalf("unexpected config for service %q (-got +want):\n%s", name, diff)
}
}
func mustNotHaveConfigForSvc(t *testing.T, cl client.Client, extNSvc *corev1.Service, cm *corev1.ConfigMap) {
t.Helper()
if err := cl.Get(context.Background(), client.ObjectKeyFromObject(cm), cm); err != nil {
t.Fatalf("Error retrieving ConfigMap: %v", err)
}
name := tailnetSvcName(extNSvc)
gotCfg := configFromCM(t, cm, name)
if gotCfg != nil {
t.Fatalf("Config %#+v for service %q found when it should not be present", gotCfg, name)
}
}
func configFromCM(t *testing.T, cm *corev1.ConfigMap, svcName string) *egressservices.Config {
t.Helper()
cfgBs, ok := cm.BinaryData[egressservices.KeyEgressServices]
if !ok {
return nil
}
cfgs := &egressservices.Configs{}
if err := json.Unmarshal(cfgBs, cfgs); err != nil {
t.Fatalf("error unmarshalling config: %v", err)
}
cfg, ok := (*cfgs)[svcName]
if ok {
return &cfg
}
return nil
}

View File

@@ -25,11 +25,13 @@ const (
proxyClassCRDPath = operatorDeploymentFilesPath + "/crds/tailscale.com_proxyclasses.yaml"
dnsConfigCRDPath = operatorDeploymentFilesPath + "/crds/tailscale.com_dnsconfigs.yaml"
recorderCRDPath = operatorDeploymentFilesPath + "/crds/tailscale.com_recorders.yaml"
proxyGroupCRDPath = operatorDeploymentFilesPath + "/crds/tailscale.com_proxygroups.yaml"
helmTemplatesPath = operatorDeploymentFilesPath + "/chart/templates"
connectorCRDHelmTemplatePath = helmTemplatesPath + "/connector.yaml"
proxyClassCRDHelmTemplatePath = helmTemplatesPath + "/proxyclass.yaml"
dnsConfigCRDHelmTemplatePath = helmTemplatesPath + "/dnsconfig.yaml"
recorderCRDHelmTemplatePath = helmTemplatesPath + "/recorder.yaml"
proxyGroupCRDHelmTemplatePath = helmTemplatesPath + "/proxygroup.yaml"
helmConditionalStart = "{{ if .Values.installCRDs -}}\n"
helmConditionalEnd = "{{- end -}}"
@@ -146,6 +148,7 @@ func generate(baseDir string) error {
{proxyClassCRDPath, proxyClassCRDHelmTemplatePath},
{dnsConfigCRDPath, dnsConfigCRDHelmTemplatePath},
{recorderCRDPath, recorderCRDHelmTemplatePath},
{proxyGroupCRDPath, proxyGroupCRDHelmTemplatePath},
} {
if err := addCRDToHelm(crd.crdPath, crd.templatePath); err != nil {
return fmt.Errorf("error adding %s CRD to Helm templates: %w", crd.crdPath, err)
@@ -161,6 +164,7 @@ func cleanup(baseDir string) error {
proxyClassCRDHelmTemplatePath,
dnsConfigCRDHelmTemplatePath,
recorderCRDHelmTemplatePath,
proxyGroupCRDHelmTemplatePath,
} {
if err := os.Remove(filepath.Join(baseDir, path)); err != nil && !os.IsNotExist(err) {
return fmt.Errorf("error cleaning up %s: %w", path, err)

View File

@@ -62,6 +62,9 @@ func Test_generate(t *testing.T) {
if !strings.Contains(installContentsWithCRD.String(), "name: recorders.tailscale.com") {
t.Errorf("Recorder CRD not found in default chart install")
}
if !strings.Contains(installContentsWithCRD.String(), "name: proxygroups.tailscale.com") {
t.Errorf("ProxyGroup CRD not found in default chart install")
}
// Test that CRDs can be excluded from Helm chart install
installContentsWithoutCRD := bytes.NewBuffer([]byte{})
@@ -83,4 +86,7 @@ func Test_generate(t *testing.T) {
if strings.Contains(installContentsWithoutCRD.String(), "name: recorders.tailscale.com") {
t.Errorf("Recorder CRD found in chart install that should not contain a CRD")
}
if strings.Contains(installContentsWithoutCRD.String(), "name: proxygroups.tailscale.com") {
t.Errorf("ProxyGroup CRD found in chart install that should not contain a CRD")
}
}

View File

@@ -48,7 +48,7 @@ type IngressReconciler struct {
// managing. This is only used for metrics.
managedIngresses set.Slice[types.UID]
proxyDefaultClass string
defaultProxyClass string
}
var (
@@ -76,7 +76,15 @@ func (a *IngressReconciler) Reconcile(ctx context.Context, req reconcile.Request
return reconcile.Result{}, a.maybeCleanup(ctx, logger, ing)
}
return reconcile.Result{}, a.maybeProvision(ctx, logger, ing)
if err := a.maybeProvision(ctx, logger, ing); err != nil {
if strings.Contains(err.Error(), optimisticLockErrorMsg) {
logger.Infof("optimistic lock error, retrying: %s", err)
} else {
return reconcile.Result{}, err
}
}
return reconcile.Result{}, nil
}
func (a *IngressReconciler) maybeCleanup(ctx context.Context, logger *zap.SugaredLogger, ing *networkingv1.Ingress) error {
@@ -90,7 +98,7 @@ func (a *IngressReconciler) maybeCleanup(ctx context.Context, logger *zap.Sugare
return nil
}
if done, err := a.ssr.Cleanup(ctx, logger, childResourceLabels(ing.Name, ing.Namespace, "ingress")); err != nil {
if done, err := a.ssr.Cleanup(ctx, logger, childResourceLabels(ing.Name, ing.Namespace, "ingress"), proxyTypeIngressResource); err != nil {
return fmt.Errorf("failed to cleanup: %w", err)
} else if !done {
logger.Debugf("cleanup not done yet, waiting for next reconcile")
@@ -136,7 +144,7 @@ func (a *IngressReconciler) maybeProvision(ctx context.Context, logger *zap.Suga
}
}
proxyClass := proxyClassForObject(ing, a.proxyDefaultClass)
proxyClass := proxyClassForObject(ing, a.defaultProxyClass)
if proxyClass != "" {
if ready, err := proxyClassIsReady(ctx, proxyClass, a.Client); err != nil {
return fmt.Errorf("error verifying ProxyClass for Ingress: %w", err)
@@ -268,6 +276,7 @@ func (a *IngressReconciler) maybeProvision(ctx context.Context, logger *zap.Suga
Tags: tags,
ChildResourceLabels: crl,
ProxyClassName: proxyClass,
proxyType: proxyTypeIngressResource,
}
if val := ing.GetAnnotations()[AnnotationExperimentalForwardClusterTrafficViaL7IngresProxy]; val == "true" {
@@ -278,12 +287,12 @@ func (a *IngressReconciler) maybeProvision(ctx context.Context, logger *zap.Suga
return fmt.Errorf("failed to provision: %w", err)
}
_, tsHost, _, err := a.ssr.DeviceInfo(ctx, crl)
dev, err := a.ssr.DeviceInfo(ctx, crl, logger)
if err != nil {
return fmt.Errorf("failed to get device ID: %w", err)
return fmt.Errorf("failed to retrieve Ingress HTTPS endpoint status: %w", err)
}
if tsHost == "" {
logger.Debugf("no Tailscale hostname known yet, waiting for proxy pod to finish auth")
if dev == nil || dev.ingressDNSName == "" {
logger.Debugf("no Ingress DNS name known yet, waiting for proxy Pod initialize and start serving Ingress")
// No hostname yet. Wait for the proxy pod to auth.
ing.Status.LoadBalancer.Ingress = nil
if err := a.Status().Update(ctx, ing); err != nil {
@@ -292,10 +301,10 @@ func (a *IngressReconciler) maybeProvision(ctx context.Context, logger *zap.Suga
return nil
}
logger.Debugf("setting ingress hostname to %q", tsHost)
logger.Debugf("setting Ingress hostname to %q", dev.ingressDNSName)
ing.Status.LoadBalancer.Ingress = []networkingv1.IngressLoadBalancerIngress{
{
Hostname: tsHost,
Hostname: dev.ingressDNSName,
Ports: []networkingv1.IngressPortStatus{
{
Protocol: "TCP",

View File

@@ -12,6 +12,7 @@ import (
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
networkingv1 "k8s.io/api/networking/v1"
apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"sigs.k8s.io/controller-runtime/pkg/client/fake"
@@ -141,12 +142,160 @@ func TestTailscaleIngress(t *testing.T) {
expectMissing[corev1.Secret](t, fc, "operator-ns", fullName)
}
func TestTailscaleIngressHostname(t *testing.T) {
tsIngressClass := &networkingv1.IngressClass{ObjectMeta: metav1.ObjectMeta{Name: "tailscale"}, Spec: networkingv1.IngressClassSpec{Controller: "tailscale.com/ts-ingress"}}
fc := fake.NewFakeClient(tsIngressClass)
ft := &fakeTSClient{}
fakeTsnetServer := &fakeTSNetServer{certDomains: []string{"foo.com"}}
zl, err := zap.NewDevelopment()
if err != nil {
t.Fatal(err)
}
ingR := &IngressReconciler{
Client: fc,
ssr: &tailscaleSTSReconciler{
Client: fc,
tsClient: ft,
tsnetServer: fakeTsnetServer,
defaultTags: []string{"tag:k8s"},
operatorNamespace: "operator-ns",
proxyImage: "tailscale/tailscale",
},
logger: zl.Sugar(),
}
// 1. Resources get created for regular Ingress
ing := &networkingv1.Ingress{
TypeMeta: metav1.TypeMeta{Kind: "Ingress", APIVersion: "networking.k8s.io/v1"},
ObjectMeta: metav1.ObjectMeta{
Name: "test",
Namespace: "default",
// The apiserver is supposed to set the UID, but the fake client
// doesn't. So, set it explicitly because other code later depends
// on it being set.
UID: types.UID("1234-UID"),
},
Spec: networkingv1.IngressSpec{
IngressClassName: ptr.To("tailscale"),
DefaultBackend: &networkingv1.IngressBackend{
Service: &networkingv1.IngressServiceBackend{
Name: "test",
Port: networkingv1.ServiceBackendPort{
Number: 8080,
},
},
},
TLS: []networkingv1.IngressTLS{
{Hosts: []string{"default-test"}},
},
},
}
mustCreate(t, fc, ing)
mustCreate(t, fc, &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: "test",
Namespace: "default",
},
Spec: corev1.ServiceSpec{
ClusterIP: "1.2.3.4",
Ports: []corev1.ServicePort{{
Port: 8080,
Name: "http"},
},
},
})
expectReconciled(t, ingR, "default", "test")
fullName, shortName := findGenName(t, fc, "default", "test", "ingress")
mustCreate(t, fc, &corev1.Pod{
ObjectMeta: metav1.ObjectMeta{
Name: fullName,
Namespace: "operator-ns",
UID: "test-uid",
},
})
opts := configOpts{
stsName: shortName,
secretName: fullName,
namespace: "default",
parentType: "ingress",
hostname: "default-test",
app: kubetypes.AppIngressResource,
}
serveConfig := &ipn.ServeConfig{
TCP: map[uint16]*ipn.TCPPortHandler{443: {HTTPS: true}},
Web: map[ipn.HostPort]*ipn.WebServerConfig{"${TS_CERT_DOMAIN}:443": {Handlers: map[string]*ipn.HTTPHandler{"/": {Proxy: "http://1.2.3.4:8080/"}}}},
}
opts.serveConfig = serveConfig
expectEqual(t, fc, expectedSecret(t, fc, opts), nil)
expectEqual(t, fc, expectedHeadlessService(shortName, "ingress"), nil)
expectEqual(t, fc, expectedSTSUserspace(t, fc, opts), removeHashAnnotation)
// 2. Ingress proxy with capability version >= 110 does not have an HTTPS endpoint set
mustUpdate(t, fc, "operator-ns", opts.secretName, func(secret *corev1.Secret) {
mak.Set(&secret.Data, "device_id", []byte("1234"))
mak.Set(&secret.Data, "tailscale_capver", []byte("110"))
mak.Set(&secret.Data, "pod_uid", []byte("test-uid"))
mak.Set(&secret.Data, "device_fqdn", []byte("foo.tailnetxyz.ts.net"))
})
expectReconciled(t, ingR, "default", "test")
ing.Finalizers = append(ing.Finalizers, "tailscale.com/finalizer")
expectEqual(t, fc, ing, nil)
// 3. Ingress proxy with capability version >= 110 advertises HTTPS endpoint
mustUpdate(t, fc, "operator-ns", opts.secretName, func(secret *corev1.Secret) {
mak.Set(&secret.Data, "device_id", []byte("1234"))
mak.Set(&secret.Data, "tailscale_capver", []byte("110"))
mak.Set(&secret.Data, "pod_uid", []byte("test-uid"))
mak.Set(&secret.Data, "device_fqdn", []byte("foo.tailnetxyz.ts.net"))
mak.Set(&secret.Data, "https_endpoint", []byte("foo.tailnetxyz.ts.net"))
})
expectReconciled(t, ingR, "default", "test")
ing.Status.LoadBalancer = networkingv1.IngressLoadBalancerStatus{
Ingress: []networkingv1.IngressLoadBalancerIngress{
{Hostname: "foo.tailnetxyz.ts.net", Ports: []networkingv1.IngressPortStatus{{Port: 443, Protocol: "TCP"}}},
},
}
expectEqual(t, fc, ing, nil)
// 4. Ingress proxy with capability version >= 110 does not have an HTTPS endpoint ready
mustUpdate(t, fc, "operator-ns", opts.secretName, func(secret *corev1.Secret) {
mak.Set(&secret.Data, "device_id", []byte("1234"))
mak.Set(&secret.Data, "tailscale_capver", []byte("110"))
mak.Set(&secret.Data, "pod_uid", []byte("test-uid"))
mak.Set(&secret.Data, "device_fqdn", []byte("foo.tailnetxyz.ts.net"))
mak.Set(&secret.Data, "https_endpoint", []byte("no-https"))
})
expectReconciled(t, ingR, "default", "test")
ing.Status.LoadBalancer.Ingress = nil
expectEqual(t, fc, ing, nil)
// 5. Ingress proxy's state Secret has https_endpoint set, but the Pod UID it records does not match the current Pod (downgrade)
mustUpdate(t, fc, "operator-ns", opts.secretName, func(secret *corev1.Secret) {
mak.Set(&secret.Data, "device_id", []byte("1234"))
mak.Set(&secret.Data, "tailscale_capver", []byte("110"))
mak.Set(&secret.Data, "pod_uid", []byte("not-the-right-uid"))
mak.Set(&secret.Data, "device_fqdn", []byte("foo.tailnetxyz.ts.net"))
mak.Set(&secret.Data, "https_endpoint", []byte("bar.tailnetxyz.ts.net"))
})
ing.Status.LoadBalancer = networkingv1.IngressLoadBalancerStatus{
Ingress: []networkingv1.IngressLoadBalancerIngress{
{Hostname: "foo.tailnetxyz.ts.net", Ports: []networkingv1.IngressPortStatus{{Port: 443, Protocol: "TCP"}}},
},
}
expectReconciled(t, ingR, "default", "test")
expectEqual(t, fc, ing, nil)
}
func TestTailscaleIngressWithProxyClass(t *testing.T) {
// Setup
pc := &tsapi.ProxyClass{
ObjectMeta: metav1.ObjectMeta{Name: "custom-metadata"},
Spec: tsapi.ProxyClassSpec{StatefulSet: &tsapi.StatefulSet{
Labels: map[string]string{"foo": "bar"},
Labels: tsapi.Labels{"foo": "bar"},
Annotations: map[string]string{"bar.io/foo": "some-val"},
Pod: &tsapi.Pod{Annotations: map[string]string{"foo.io/bar": "some-val"}}}},
}
@@ -253,7 +402,7 @@ func TestTailscaleIngressWithProxyClass(t *testing.T) {
pc.Status = tsapi.ProxyClassStatus{
Conditions: []metav1.Condition{{
Status: metav1.ConditionTrue,
Type: string(tsapi.ProxyClassready),
Type: string(tsapi.ProxyClassReady),
ObservedGeneration: pc.Generation,
}}}
})
@@ -271,3 +420,143 @@ func TestTailscaleIngressWithProxyClass(t *testing.T) {
opts.proxyClass = ""
expectEqual(t, fc, expectedSTSUserspace(t, fc, opts), removeHashAnnotation)
}
func TestTailscaleIngressWithServiceMonitor(t *testing.T) {
pc := &tsapi.ProxyClass{
ObjectMeta: metav1.ObjectMeta{Name: "metrics", Generation: 1},
Spec: tsapi.ProxyClassSpec{},
Status: tsapi.ProxyClassStatus{
Conditions: []metav1.Condition{{
Status: metav1.ConditionTrue,
Type: string(tsapi.ProxyClassReady),
ObservedGeneration: 1,
}}},
}
ing := &networkingv1.Ingress{
TypeMeta: metav1.TypeMeta{Kind: "Ingress", APIVersion: "networking.k8s.io/v1"},
ObjectMeta: metav1.ObjectMeta{
Name: "test",
Namespace: "default",
// The apiserver is supposed to set the UID, but the fake client
// doesn't. So, set it explicitly because other code later depends
// on it being set.
UID: types.UID("1234-UID"),
Labels: map[string]string{
"tailscale.com/proxy-class": "metrics",
},
},
Spec: networkingv1.IngressSpec{
IngressClassName: ptr.To("tailscale"),
DefaultBackend: &networkingv1.IngressBackend{
Service: &networkingv1.IngressServiceBackend{
Name: "test",
Port: networkingv1.ServiceBackendPort{
Number: 8080,
},
},
},
TLS: []networkingv1.IngressTLS{
{Hosts: []string{"default-test"}},
},
},
}
svc := &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: "test",
Namespace: "default",
},
Spec: corev1.ServiceSpec{
ClusterIP: "1.2.3.4",
Ports: []corev1.ServicePort{{
Port: 8080,
Name: "http"},
},
},
}
crd := &apiextensionsv1.CustomResourceDefinition{ObjectMeta: metav1.ObjectMeta{Name: serviceMonitorCRD}}
tsIngressClass := &networkingv1.IngressClass{ObjectMeta: metav1.ObjectMeta{Name: "tailscale"}, Spec: networkingv1.IngressClassSpec{Controller: "tailscale.com/ts-ingress"}}
fc := fake.NewClientBuilder().
WithScheme(tsapi.GlobalScheme).
WithObjects(pc, tsIngressClass, ing, svc).
WithStatusSubresource(pc).
Build()
ft := &fakeTSClient{}
fakeTsnetServer := &fakeTSNetServer{certDomains: []string{"foo.com"}}
zl, err := zap.NewDevelopment()
if err != nil {
t.Fatal(err)
}
ingR := &IngressReconciler{
Client: fc,
ssr: &tailscaleSTSReconciler{
Client: fc,
tsClient: ft,
tsnetServer: fakeTsnetServer,
defaultTags: []string{"tag:k8s"},
operatorNamespace: "operator-ns",
proxyImage: "tailscale/tailscale",
},
logger: zl.Sugar(),
}
expectReconciled(t, ingR, "default", "test")
fullName, shortName := findGenName(t, fc, "default", "test", "ingress")
serveConfig := &ipn.ServeConfig{
TCP: map[uint16]*ipn.TCPPortHandler{443: {HTTPS: true}},
Web: map[ipn.HostPort]*ipn.WebServerConfig{"${TS_CERT_DOMAIN}:443": {Handlers: map[string]*ipn.HTTPHandler{"/": {Proxy: "http://1.2.3.4:8080/"}}}},
}
opts := configOpts{
stsName: shortName,
secretName: fullName,
namespace: "default",
tailscaleNamespace: "operator-ns",
parentType: "ingress",
hostname: "default-test",
app: kubetypes.AppIngressResource,
namespaced: true,
proxyType: proxyTypeIngressResource,
serveConfig: serveConfig,
resourceVersion: "1",
}
// 1. Enable metrics- expect metrics Service to be created
mustUpdate(t, fc, "", "metrics", func(proxyClass *tsapi.ProxyClass) {
proxyClass.Spec.Metrics = &tsapi.Metrics{Enable: true}
})
opts.enableMetrics = true
expectReconciled(t, ingR, "default", "test")
expectEqual(t, fc, expectedMetricsService(opts), nil)
// 2. Enable ServiceMonitor - should not error when there is no ServiceMonitor CRD in cluster
mustUpdate(t, fc, "", "metrics", func(pc *tsapi.ProxyClass) {
pc.Spec.Metrics.ServiceMonitor = &tsapi.ServiceMonitor{Enable: true, Labels: tsapi.Labels{"foo": "bar"}}
})
expectReconciled(t, ingR, "default", "test")
expectEqual(t, fc, expectedMetricsService(opts), nil)
// 3. Create ServiceMonitor CRD and reconcile- ServiceMonitor should get created
mustCreate(t, fc, crd)
expectReconciled(t, ingR, "default", "test")
opts.serviceMonitorLabels = tsapi.Labels{"foo": "bar"}
expectEqual(t, fc, expectedMetricsService(opts), nil)
expectEqualUnstructured(t, fc, expectedServiceMonitor(t, opts))
// 4. Update ServiceMonitor CRD and reconcile- ServiceMonitor should get updated
mustUpdate(t, fc, pc.Namespace, pc.Name, func(proxyClass *tsapi.ProxyClass) {
proxyClass.Spec.Metrics.ServiceMonitor.Labels = nil
})
expectReconciled(t, ingR, "default", "test")
opts.serviceMonitorLabels = nil
opts.resourceVersion = "2"
expectEqual(t, fc, expectedMetricsService(opts), nil)
expectEqualUnstructured(t, fc, expectedServiceMonitor(t, opts))
// 5. Disable metrics - metrics resources should get deleted.
mustUpdate(t, fc, pc.Namespace, pc.Name, func(proxyClass *tsapi.ProxyClass) {
proxyClass.Spec.Metrics = nil
})
expectReconciled(t, ingR, "default", "test")
expectMissing[corev1.Service](t, fc, "operator-ns", metricsResourceName(shortName))
// ServiceMonitor gets garbage collected when the Service is deleted - we cannot test that here.
}

View File

@@ -0,0 +1,295 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build !plan9
package main
import (
"context"
"fmt"
"reflect"
"go.uber.org/zap"
corev1 "k8s.io/api/core/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/types"
"sigs.k8s.io/controller-runtime/pkg/client"
tsapi "tailscale.com/k8s-operator/apis/v1alpha1"
)
const (
labelMetricsTarget = "tailscale.com/metrics-target"
// These labels get transferred from the metrics Service to the ingested Prometheus metrics.
labelPromProxyType = "ts_proxy_type"
labelPromProxyParentName = "ts_proxy_parent_name"
labelPromProxyParentNamespace = "ts_proxy_parent_namespace"
labelPromJob = "ts_prom_job"
serviceMonitorCRD = "servicemonitors.monitoring.coreos.com"
)
// ServiceMonitor contains a subset of fields of servicemonitors.monitoring.coreos.com Custom Resource Definition.
// Duplicating it here allows us to avoid importing prometheus-operator library.
// https://github.com/prometheus-operator/prometheus-operator/blob/bb4514e0d5d69f20270e29cfd4ad39b87865ccdf/pkg/apis/monitoring/v1/servicemonitor_types.go#L40
type ServiceMonitor struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata"`
Spec ServiceMonitorSpec `json:"spec"`
}
// https://github.com/prometheus-operator/prometheus-operator/blob/bb4514e0d5d69f20270e29cfd4ad39b87865ccdf/pkg/apis/monitoring/v1/servicemonitor_types.go#L55
type ServiceMonitorSpec struct {
// Endpoints defines the endpoints to be scraped on the selected Service(s).
// https://github.com/prometheus-operator/prometheus-operator/blob/bb4514e0d5d69f20270e29cfd4ad39b87865ccdf/pkg/apis/monitoring/v1/servicemonitor_types.go#L82
Endpoints []ServiceMonitorEndpoint `json:"endpoints"`
// JobLabel is the label on the Service whose value will become the value of the Prometheus job label for the metrics ingested via this ServiceMonitor.
// https://github.com/prometheus-operator/prometheus-operator/blob/bb4514e0d5d69f20270e29cfd4ad39b87865ccdf/pkg/apis/monitoring/v1/servicemonitor_types.go#L66
JobLabel string `json:"jobLabel"`
// NamespaceSelector selects the namespace of Service(s) that this ServiceMonitor allows to scrape.
// https://github.com/prometheus-operator/prometheus-operator/blob/bb4514e0d5d69f20270e29cfd4ad39b87865ccdf/pkg/apis/monitoring/v1/servicemonitor_types.go#L88
NamespaceSelector ServiceMonitorNamespaceSelector `json:"namespaceSelector,omitempty"`
// Selector is the label selector for Service(s) that this ServiceMonitor allows to scrape.
// https://github.com/prometheus-operator/prometheus-operator/blob/bb4514e0d5d69f20270e29cfd4ad39b87865ccdf/pkg/apis/monitoring/v1/servicemonitor_types.go#L85
Selector metav1.LabelSelector `json:"selector"`
// TargetLabels are labels on the selected Service that should be applied as Prometheus labels to the ingested metrics.
// https://github.com/prometheus-operator/prometheus-operator/blob/bb4514e0d5d69f20270e29cfd4ad39b87865ccdf/pkg/apis/monitoring/v1/servicemonitor_types.go#L72
TargetLabels []string `json:"targetLabels"`
}
// ServiceMonitorNamespaceSelector selects namespaces in which Prometheus operator will attempt to find Services for
// this ServiceMonitor.
// https://github.com/prometheus-operator/prometheus-operator/blob/bb4514e0d5d69f20270e29cfd4ad39b87865ccdf/pkg/apis/monitoring/v1/servicemonitor_types.go#L88
type ServiceMonitorNamespaceSelector struct {
MatchNames []string `json:"matchNames,omitempty"`
}
// ServiceMonitorEndpoint defines an endpoint of Service to scrape. We only define port here. Prometheus by default
// scrapes /metrics path, which is what we want.
type ServiceMonitorEndpoint struct {
// Port is the name of the Service port that Prometheus will scrape.
Port string `json:"port,omitempty"`
}
func reconcileMetricsResources(ctx context.Context, logger *zap.SugaredLogger, opts *metricsOpts, pc *tsapi.ProxyClass, cl client.Client) error {
if opts.proxyType == proxyTypeEgress {
// Metrics are currently not being enabled for standalone egress proxies.
return nil
}
if pc == nil || pc.Spec.Metrics == nil || !pc.Spec.Metrics.Enable {
return maybeCleanupMetricsResources(ctx, opts, cl)
}
metricsSvc := &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: metricsResourceName(opts.proxyStsName),
Namespace: opts.tsNamespace,
Labels: metricsResourceLabels(opts),
},
Spec: corev1.ServiceSpec{
Selector: opts.proxyLabels,
Type: corev1.ServiceTypeClusterIP,
Ports: []corev1.ServicePort{{Protocol: "TCP", Port: 9002, Name: "metrics"}},
},
}
var err error
metricsSvc, err = createOrUpdate(ctx, cl, opts.tsNamespace, metricsSvc, func(svc *corev1.Service) {
svc.Spec.Ports = metricsSvc.Spec.Ports
svc.Spec.Selector = metricsSvc.Spec.Selector
})
if err != nil {
return fmt.Errorf("error ensuring metrics Service: %w", err)
}
crdExists, err := hasServiceMonitorCRD(ctx, cl)
if err != nil {
return fmt.Errorf("error verifying that %q CRD exists: %w", serviceMonitorCRD, err)
}
if !crdExists {
return nil
}
if pc.Spec.Metrics.ServiceMonitor == nil || !pc.Spec.Metrics.ServiceMonitor.Enable {
return maybeCleanupServiceMonitor(ctx, cl, opts.proxyStsName, opts.tsNamespace)
}
logger.Infof("ensuring ServiceMonitor for metrics Service %s/%s", metricsSvc.Namespace, metricsSvc.Name)
svcMonitor, err := newServiceMonitor(metricsSvc, pc.Spec.Metrics.ServiceMonitor)
if err != nil {
return fmt.Errorf("error creating ServiceMonitor: %w", err)
}
// We don't use createOrUpdate here because that does not work with unstructured types.
existing := svcMonitor.DeepCopy()
err = cl.Get(ctx, client.ObjectKeyFromObject(metricsSvc), existing)
if apierrors.IsNotFound(err) {
if err := cl.Create(ctx, svcMonitor); err != nil {
return fmt.Errorf("error creating ServiceMonitor: %w", err)
}
return nil
}
if err != nil {
return fmt.Errorf("error getting ServiceMonitor: %w", err)
}
// Currently, we only update labels on the ServiceMonitor as those are the only values that can change.
if !reflect.DeepEqual(existing.GetLabels(), svcMonitor.GetLabels()) {
existing.SetLabels(svcMonitor.GetLabels())
if err := cl.Update(ctx, existing); err != nil {
return fmt.Errorf("error updating ServiceMonitor: %w", err)
}
}
return nil
}
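// A minimal ProxyClass that enables both the metrics Service and the
// ServiceMonitor might look like the following (illustrative; mirrors
// TestTailscaleIngressWithServiceMonitor above):
//
//	pc := &tsapi.ProxyClass{
//		ObjectMeta: metav1.ObjectMeta{Name: "metrics"},
//		Spec: tsapi.ProxyClassSpec{
//			Metrics: &tsapi.Metrics{
//				Enable:         true,
//				ServiceMonitor: &tsapi.ServiceMonitor{Enable: true},
//			},
//		},
//	}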
// maybeCleanupMetricsResources ensures that any metrics resources created for a proxy are deleted. Only the metrics
// Service gets deleted explicitly; the ServiceMonitor carries the Service's owner reference, so it gets garbage
// collected automatically.
func maybeCleanupMetricsResources(ctx context.Context, opts *metricsOpts, cl client.Client) error {
sel := metricsSvcSelector(opts.proxyLabels, opts.proxyType)
return cl.DeleteAllOf(ctx, &corev1.Service{}, client.InNamespace(opts.tsNamespace), client.MatchingLabels(sel))
}
// maybeCleanupServiceMonitor cleans up any ServiceMonitor created for the named proxy StatefulSet.
func maybeCleanupServiceMonitor(ctx context.Context, cl client.Client, stsName, ns string) error {
smName := metricsResourceName(stsName)
sm := serviceMonitorTemplate(smName, ns)
u, err := serviceMonitorToUnstructured(sm)
if err != nil {
return fmt.Errorf("error building ServiceMonitor: %w", err)
}
err = cl.Get(ctx, types.NamespacedName{Name: smName, Namespace: ns}, u)
if apierrors.IsNotFound(err) {
return nil // nothing to do
}
if err != nil {
return fmt.Errorf("error verifying if ServiceMonitor %s/%s exists: %w", ns, stsName, err)
}
return cl.Delete(ctx, u)
}
// newServiceMonitor takes a metrics Service created for a proxy and constructs and returns a ServiceMonitor for that
// proxy that can be applied to the kube API server.
// The ServiceMonitor is returned as Unstructured type - this allows us to avoid importing prometheus-operator API server client/schema.
func newServiceMonitor(metricsSvc *corev1.Service, spec *tsapi.ServiceMonitor) (*unstructured.Unstructured, error) {
sm := serviceMonitorTemplate(metricsSvc.Name, metricsSvc.Namespace)
sm.ObjectMeta.Labels = metricsSvc.Labels
if spec != nil && len(spec.Labels) > 0 {
sm.ObjectMeta.Labels = mergeMapKeys(sm.ObjectMeta.Labels, spec.Labels.Parse())
}
sm.ObjectMeta.OwnerReferences = []metav1.OwnerReference{*metav1.NewControllerRef(metricsSvc, corev1.SchemeGroupVersion.WithKind("Service"))}
sm.Spec = ServiceMonitorSpec{
Selector: metav1.LabelSelector{MatchLabels: metricsSvc.Labels},
Endpoints: []ServiceMonitorEndpoint{{
Port: "metrics",
}},
NamespaceSelector: ServiceMonitorNamespaceSelector{
MatchNames: []string{metricsSvc.Namespace},
},
JobLabel: labelPromJob,
TargetLabels: []string{
labelPromProxyParentName,
labelPromProxyParentNamespace,
labelPromProxyType,
},
}
return serviceMonitorToUnstructured(sm)
}
// serviceMonitorToUnstructured takes a ServiceMonitor and converts it to Unstructured type that can be used by the c/r
// client in Kubernetes API server calls.
func serviceMonitorToUnstructured(sm *ServiceMonitor) (*unstructured.Unstructured, error) {
contents, err := runtime.DefaultUnstructuredConverter.ToUnstructured(sm)
if err != nil {
return nil, fmt.Errorf("error converting ServiceMonitor to Unstructured: %w", err)
}
u := &unstructured.Unstructured{}
u.SetUnstructuredContent(contents)
u.SetGroupVersionKind(sm.GroupVersionKind())
return u, nil
}
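// Usage sketch (mirrors maybeCleanupServiceMonitor above; name and namespace
// are assumed example values). The GVK set here is what lets the
// controller-runtime client address monitoring.coreos.com/v1 ServiceMonitors
// without a typed scheme:
//
//	u, err := serviceMonitorToUnstructured(serviceMonitorTemplate("ts-test-metrics", "tailscale"))
//	if err == nil {
//		err = cl.Get(ctx, types.NamespacedName{Name: "ts-test-metrics", Namespace: "tailscale"}, u)
//	}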
// metricsResourceName returns name for metrics Service and ServiceMonitor for a proxy StatefulSet.
func metricsResourceName(stsName string) string {
// Maximum length of a StatefulSet name is 52 chars, so this is fine.
return fmt.Sprintf("%s-metrics", stsName)
}
// metricsResourceLabels constructs labels that will be applied to metrics Service and metrics ServiceMonitor for a
// proxy.
func metricsResourceLabels(opts *metricsOpts) map[string]string {
lbls := map[string]string{
LabelManaged: "true",
labelMetricsTarget: opts.proxyStsName,
labelPromProxyType: opts.proxyType,
labelPromProxyParentName: opts.proxyLabels[LabelParentName],
}
// Include namespace label for proxies created for a namespaced type.
if isNamespacedProxyType(opts.proxyType) {
lbls[labelPromProxyParentNamespace] = opts.proxyLabels[LabelParentNamespace]
}
lbls[labelPromJob] = promJobName(opts)
return lbls
}
// promJobName constructs the value of the Prometheus job label that will apply to all metrics for a ServiceMonitor.
func promJobName(opts *metricsOpts) string {
// Include parent resource namespace for proxies created for namespaced types.
if opts.proxyType == proxyTypeIngressResource || opts.proxyType == proxyTypeIngressService {
return fmt.Sprintf("ts_%s_%s_%s", opts.proxyType, opts.proxyLabels[LabelParentNamespace], opts.proxyLabels[LabelParentName])
}
return fmt.Sprintf("ts_%s_%s", opts.proxyType, opts.proxyLabels[LabelParentName])
}
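// For example (assumed names), an Ingress "test" in namespace "default" yields
// a job label of the form "ts_<proxy type>_default_test", whereas a
// cluster-scoped proxy type yields "ts_<proxy type>_<parent name>".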
// metricsSvcSelector returns the minimum label set to uniquely identify a metrics Service for a proxy.
func metricsSvcSelector(proxyLabels map[string]string, proxyType string) map[string]string {
sel := map[string]string{
labelPromProxyType: proxyType,
labelPromProxyParentName: proxyLabels[LabelParentName],
}
// Include namespace label for proxies created for a namespaced type.
if isNamespacedProxyType(proxyType) {
sel[labelPromProxyParentNamespace] = proxyLabels[LabelParentNamespace]
}
return sel
}
// serviceMonitorTemplate returns a base ServiceMonitor type that, when converted to Unstructured, is a valid type that
// can be used in kube API server calls via the c/r client.
func serviceMonitorTemplate(name, ns string) *ServiceMonitor {
return &ServiceMonitor{
TypeMeta: metav1.TypeMeta{
Kind: "ServiceMonitor",
APIVersion: "monitoring.coreos.com/v1",
},
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: ns,
},
}
}
type metricsOpts struct {
proxyStsName string // name of StatefulSet for proxy
tsNamespace string // namespace in which Tailscale is installed
proxyLabels map[string]string // labels of the proxy StatefulSet
proxyType string
}
func isNamespacedProxyType(typ string) bool {
return typ == proxyTypeIngressResource || typ == proxyTypeIngressService
}
func mergeMapKeys(a, b map[string]string) map[string]string {
m := make(map[string]string, len(a)+len(b))
for key, val := range b {
m[key] = val
}
for key, val := range a {
m[key] = val
}
return m
}
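// mergeMapKeysExampleSketch is a hedged, illustrative-only helper (not part of this change; the name is hypothetical):
// because a is written last, keys present in both maps resolve to a's value.
func mergeMapKeysExampleSketch() map[string]string {
a := map[string]string{"k": "from-a"}
b := map[string]string{"k": "from-b", "x": "y"}
return mergeMapKeys(a, b) // {"k": "from-a", "x": "y"}
}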

View File

@@ -9,6 +9,7 @@ import (
"context"
"fmt"
"slices"
"strings"
"sync"
_ "embed"
@@ -86,7 +87,7 @@ func (a *NameserverReconciler) Reconcile(ctx context.Context, req reconcile.Requ
return reconcile.Result{}, nil
}
logger.Info("Cleaning up DNSConfig resources")
if err := a.maybeCleanup(ctx, &dnsCfg, logger); err != nil {
if err := a.maybeCleanup(&dnsCfg); err != nil {
logger.Errorf("error cleaning up reconciler resource: %v", err)
return res, err
}
@@ -100,9 +101,9 @@ func (a *NameserverReconciler) Reconcile(ctx context.Context, req reconcile.Requ
}
oldCnStatus := dnsCfg.Status.DeepCopy()
setStatus := func(dnsCfg *tsapi.DNSConfig, conditionType tsapi.ConditionType, status metav1.ConditionStatus, reason, message string) (reconcile.Result, error) {
setStatus := func(dnsCfg *tsapi.DNSConfig, status metav1.ConditionStatus, reason, message string) (reconcile.Result, error) {
tsoperator.SetDNSConfigCondition(dnsCfg, tsapi.NameserverReady, status, reason, message, dnsCfg.Generation, a.clock, logger)
if !apiequality.Semantic.DeepEqual(oldCnStatus, dnsCfg.Status) {
if !apiequality.Semantic.DeepEqual(oldCnStatus, &dnsCfg.Status) {
// An error encountered here should get returned by the Reconcile function.
if updateErr := a.Client.Status().Update(ctx, dnsCfg); updateErr != nil {
err = errors.Wrap(err, updateErr.Error())
@@ -118,7 +119,7 @@ func (a *NameserverReconciler) Reconcile(ctx context.Context, req reconcile.Requ
msg := "invalid cluster configuration: more than one tailscale.com/dnsconfigs found. Please ensure that no more than one is created."
logger.Error(msg)
a.recorder.Event(&dnsCfg, corev1.EventTypeWarning, reasonMultipleDNSConfigsPresent, messageMultipleDNSConfigsPresent)
setStatus(&dnsCfg, tsapi.NameserverReady, metav1.ConditionFalse, reasonMultipleDNSConfigsPresent, messageMultipleDNSConfigsPresent)
setStatus(&dnsCfg, metav1.ConditionFalse, reasonMultipleDNSConfigsPresent, messageMultipleDNSConfigsPresent)
}
if !slices.Contains(dnsCfg.Finalizers, FinalizerName) {
@@ -127,11 +128,16 @@ func (a *NameserverReconciler) Reconcile(ctx context.Context, req reconcile.Requ
if err := a.Update(ctx, &dnsCfg); err != nil {
msg := fmt.Sprintf(messageNameserverCreationFailed, err)
logger.Error(msg)
return setStatus(&dnsCfg, tsapi.NameserverReady, metav1.ConditionFalse, reasonNameserverCreationFailed, msg)
return setStatus(&dnsCfg, metav1.ConditionFalse, reasonNameserverCreationFailed, msg)
}
}
if err := a.maybeProvision(ctx, &dnsCfg, logger); err != nil {
return reconcile.Result{}, fmt.Errorf("error provisioning nameserver resources: %w", err)
if strings.Contains(err.Error(), optimisticLockErrorMsg) {
logger.Infof("optimistic lock error, retrying: %s", err)
return reconcile.Result{}, nil
} else {
return reconcile.Result{}, fmt.Errorf("error provisioning nameserver resources: %w", err)
}
}
a.mu.Lock()
@@ -149,7 +155,7 @@ func (a *NameserverReconciler) Reconcile(ctx context.Context, req reconcile.Requ
dnsCfg.Status.Nameserver = &tsapi.NameserverStatus{
IP: ip,
}
return setStatus(&dnsCfg, tsapi.NameserverReady, metav1.ConditionTrue, reasonNameserverCreated, reasonNameserverCreated)
return setStatus(&dnsCfg, metav1.ConditionTrue, reasonNameserverCreated, reasonNameserverCreated)
}
logger.Info("nameserver Service does not have an IP address allocated, waiting...")
return reconcile.Result{}, nil
@@ -188,7 +194,7 @@ func (a *NameserverReconciler) maybeProvision(ctx context.Context, tsDNSCfg *tsa
// maybeCleanup removes the DNSConfig from being tracked. The cluster resources
// created will be automatically garbage collected, as they are owned by the
// DNSConfig.
func (a *NameserverReconciler) maybeCleanup(ctx context.Context, dnsCfg *tsapi.DNSConfig, logger *zap.SugaredLogger) error {
func (a *NameserverReconciler) maybeCleanup(dnsCfg *tsapi.DNSConfig) error {
a.mu.Lock()
a.managedNameservers.Remove(dnsCfg.UID)
a.mu.Unlock()

View File

@@ -11,6 +11,7 @@ import (
"context"
"os"
"regexp"
"strconv"
"strings"
"time"
@@ -23,8 +24,11 @@ import (
discoveryv1 "k8s.io/api/discovery/v1"
networkingv1 "k8s.io/api/networking/v1"
rbacv1 "k8s.io/api/rbac/v1"
apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
"k8s.io/apimachinery/pkg/fields"
"k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/rest"
toolscache "k8s.io/client-go/tools/cache"
"sigs.k8s.io/controller-runtime/pkg/builder"
"sigs.k8s.io/controller-runtime/pkg/cache"
"sigs.k8s.io/controller-runtime/pkg/client"
@@ -109,7 +113,7 @@ func main() {
proxyActAsDefaultLoadBalancer: isDefaultLoadBalancer,
proxyTags: tags,
proxyFirewallMode: tsFirewallMode,
proxyDefaultClass: defaultProxyClass,
defaultProxyClass: defaultProxyClass,
}
runReconcilers(rOpts)
}
@@ -143,12 +147,20 @@ func initTSNet(zlog *zap.SugaredLogger) (*tsnet.Server, *tailscale.Client) {
TokenURL: "https://login.tailscale.com/api/v2/oauth/token",
}
tsClient := tailscale.NewClient("-", nil)
tsClient.UserAgent = "tailscale-k8s-operator"
tsClient.HTTPClient = credentials.Client(context.Background())
s := &tsnet.Server{
Hostname: hostname,
Logf: zlog.Named("tailscaled").Debugf,
}
if p := os.Getenv("TS_PORT"); p != "" {
port, err := strconv.ParseUint(p, 10, 16)
if err != nil {
startlog.Fatalf("TS_PORT %q cannot be parsed as uint16: %v", p, err)
}
s.Port = uint16(port)
}
if kubeSecret != "" {
st, err := kubestore.New(logger.Discard, kubeSecret)
if err != nil {
@@ -230,20 +242,29 @@ func runReconcilers(opts reconcilerOpts) {
nsFilter := cache.ByObject{
Field: client.InNamespace(opts.tailscaleNamespace).AsSelector(),
}
// We watch the ServiceMonitor CRD to ensure that reconcilers are re-triggered if a user's workflow results in the
// ServiceMonitor CRD being applied after some of our resources that define ServiceMonitor creation. This selector
// ensures that we only watch the ServiceMonitor CRD and that we don't cache its full contents.
serviceMonitorSelector := cache.ByObject{
Field: fields.SelectorFromSet(fields.Set{"metadata.name": serviceMonitorCRD}),
Transform: crdTransformer(startlog),
}
mgrOpts := manager.Options{
// TODO (irbekrm): stricter filtering what we watch/cache/call
// reconcilers on. c/r by default starts a watch on any
// resources that we GET via the controller manager's client.
Cache: cache.Options{
ByObject: map[client.Object]cache.ByObject{
&corev1.Secret{}: nsFilter,
&corev1.ServiceAccount{}: nsFilter,
&corev1.ConfigMap{}: nsFilter,
&appsv1.StatefulSet{}: nsFilter,
&appsv1.Deployment{}: nsFilter,
&discoveryv1.EndpointSlice{}: nsFilter,
&rbacv1.Role{}: nsFilter,
&rbacv1.RoleBinding{}: nsFilter,
&corev1.Secret{}: nsFilter,
&corev1.ServiceAccount{}: nsFilter,
&corev1.Pod{}: nsFilter,
&corev1.ConfigMap{}: nsFilter,
&appsv1.StatefulSet{}: nsFilter,
&appsv1.Deployment{}: nsFilter,
&discoveryv1.EndpointSlice{}: nsFilter,
&rbacv1.Role{}: nsFilter,
&rbacv1.RoleBinding{}: nsFilter,
&apiextensionsv1.CustomResourceDefinition{}: serviceMonitorSelector,
},
},
Scheme: tsapi.GlobalScheme,
@@ -285,7 +306,7 @@ func runReconcilers(opts reconcilerOpts) {
recorder: eventRecorder,
tsNamespace: opts.tailscaleNamespace,
clock: tstime.DefaultClock{},
proxyDefaultClass: opts.proxyDefaultClass,
defaultProxyClass: opts.defaultProxyClass,
})
if err != nil {
startlog.Fatalf("could not create service reconciler: %v", err)
@@ -308,7 +329,7 @@ func runReconcilers(opts reconcilerOpts) {
recorder: eventRecorder,
Client: mgr.GetClient(),
logger: opts.log.Named("ingress-reconciler"),
proxyDefaultClass: opts.proxyDefaultClass,
defaultProxyClass: opts.defaultProxyClass,
})
if err != nil {
startlog.Fatalf("could not create ingress reconciler: %v", err)
@@ -353,8 +374,72 @@ func runReconcilers(opts reconcilerOpts) {
if err != nil {
startlog.Fatalf("could not create nameserver reconciler: %v", err)
}
egressSvcFilter := handler.EnqueueRequestsFromMapFunc(egressSvcsHandler)
egressProxyGroupFilter := handler.EnqueueRequestsFromMapFunc(egressSvcsFromEgressProxyGroup(mgr.GetClient(), opts.log))
err = builder.
ControllerManagedBy(mgr).
Named("egress-svcs-reconciler").
Watches(&corev1.Service{}, egressSvcFilter).
Watches(&tsapi.ProxyGroup{}, egressProxyGroupFilter).
Complete(&egressSvcsReconciler{
Client: mgr.GetClient(),
tsNamespace: opts.tailscaleNamespace,
recorder: eventRecorder,
clock: tstime.DefaultClock{},
logger: opts.log.Named("egress-svcs-reconciler"),
})
if err != nil {
startlog.Fatalf("could not create egress Services reconciler: %v", err)
}
if err := mgr.GetFieldIndexer().IndexField(context.Background(), new(corev1.Service), indexEgressProxyGroup, indexEgressServices); err != nil {
startlog.Fatalf("failed setting up indexer for egress Services: %v", err)
}
egressSvcFromEpsFilter := handler.EnqueueRequestsFromMapFunc(egressSvcFromEps)
err = builder.
ControllerManagedBy(mgr).
Named("egress-svcs-readiness-reconciler").
Watches(&corev1.Service{}, egressSvcFilter).
Watches(&discoveryv1.EndpointSlice{}, egressSvcFromEpsFilter).
Complete(&egressSvcsReadinessReconciler{
Client: mgr.GetClient(),
tsNamespace: opts.tailscaleNamespace,
clock: tstime.DefaultClock{},
logger: opts.log.Named("egress-svcs-readiness-reconciler"),
})
if err != nil {
startlog.Fatalf("could not create egress Services readiness reconciler: %v", err)
}
epsFilter := handler.EnqueueRequestsFromMapFunc(egressEpsHandler)
podsFilter := handler.EnqueueRequestsFromMapFunc(egressEpsFromPGPods(mgr.GetClient(), opts.tailscaleNamespace))
secretsFilter := handler.EnqueueRequestsFromMapFunc(egressEpsFromPGStateSecrets(mgr.GetClient(), opts.tailscaleNamespace))
epsFromExtNSvcFilter := handler.EnqueueRequestsFromMapFunc(epsFromExternalNameService(mgr.GetClient(), opts.log, opts.tailscaleNamespace))
err = builder.
ControllerManagedBy(mgr).
Named("egress-eps-reconciler").
Watches(&discoveryv1.EndpointSlice{}, epsFilter).
Watches(&corev1.Pod{}, podsFilter).
Watches(&corev1.Secret{}, secretsFilter).
Watches(&corev1.Service{}, epsFromExtNSvcFilter).
Complete(&egressEpsReconciler{
Client: mgr.GetClient(),
tsNamespace: opts.tailscaleNamespace,
logger: opts.log.Named("egress-eps-reconciler"),
})
if err != nil {
startlog.Fatalf("could not create egress EndpointSlices reconciler: %v", err)
}
// The ProxyClass reconciler gets triggered on ServiceMonitor CRD changes to ensure that any ProxyClasses that
// define that a ServiceMonitor should be created, but were marked invalid because the CRD did not exist, get
// reconciled if the CRD is applied at a later point.
serviceMonitorFilter := handler.EnqueueRequestsFromMapFunc(proxyClassesWithServiceMonitor(mgr.GetClient(), opts.log))
err = builder.ControllerManagedBy(mgr).
For(&tsapi.ProxyClass{}).
Watches(&apiextensionsv1.CustomResourceDefinition{}, serviceMonitorFilter).
Complete(&ProxyClassReconciler{
Client: mgr.GetClient(),
recorder: eventRecorder,
@@ -414,6 +499,34 @@ func runReconcilers(opts reconcilerOpts) {
startlog.Fatalf("could not create Recorder reconciler: %v", err)
}
// ProxyGroup reconciler.
ownedByProxyGroupFilter := handler.EnqueueRequestForOwner(mgr.GetScheme(), mgr.GetRESTMapper(), &tsapi.ProxyGroup{})
proxyClassFilterForProxyGroup := handler.EnqueueRequestsFromMapFunc(proxyClassHandlerForProxyGroup(mgr.GetClient(), startlog))
err = builder.ControllerManagedBy(mgr).
For(&tsapi.ProxyGroup{}).
Watches(&appsv1.StatefulSet{}, ownedByProxyGroupFilter).
Watches(&corev1.ServiceAccount{}, ownedByProxyGroupFilter).
Watches(&corev1.Secret{}, ownedByProxyGroupFilter).
Watches(&rbacv1.Role{}, ownedByProxyGroupFilter).
Watches(&rbacv1.RoleBinding{}, ownedByProxyGroupFilter).
Watches(&tsapi.ProxyClass{}, proxyClassFilterForProxyGroup).
Complete(&ProxyGroupReconciler{
recorder: eventRecorder,
Client: mgr.GetClient(),
l: opts.log.Named("proxygroup-reconciler"),
clock: tstime.DefaultClock{},
tsClient: opts.tsClient,
tsNamespace: opts.tailscaleNamespace,
proxyImage: opts.proxyImage,
defaultTags: strings.Split(opts.proxyTags, ","),
tsFirewallMode: opts.proxyFirewallMode,
defaultProxyClass: opts.defaultProxyClass,
})
if err != nil {
startlog.Fatalf("could not create ProxyGroup reconciler: %v", err)
}
startlog.Infof("Startup complete, operator running, version: %s", version.Long())
if err := mgr.Start(signals.SetupSignalHandler()); err != nil {
startlog.Fatalf("could not start manager: %v", err)
@@ -454,10 +567,10 @@ type reconcilerOpts struct {
// Auto is usually the best choice, unless you want to explicitly set
// specific mode for debugging purposes.
proxyFirewallMode string
// proxyDefaultClass is the name of the ProxyClass to use as the default
// defaultProxyClass is the name of the ProxyClass to use as the default
// class for proxies that do not have a ProxyClass set.
// this is defined by an operator env variable.
proxyDefaultClass string
defaultProxyClass string
}
// enqueueAllIngressEgressProxySvcsinNS returns a reconcile request for each
@@ -646,6 +759,27 @@ func proxyClassHandlerForConnector(cl client.Client, logger *zap.SugaredLogger)
}
}
// proxyClassHandlerForProxyGroup returns a handler that, for a given ProxyClass,
// returns a list of reconcile requests for all ProxyGroups that have
// .spec.proxyClass set to that ProxyClass.
func proxyClassHandlerForProxyGroup(cl client.Client, logger *zap.SugaredLogger) handler.MapFunc {
return func(ctx context.Context, o client.Object) []reconcile.Request {
pgList := new(tsapi.ProxyGroupList)
if err := cl.List(ctx, pgList); err != nil {
logger.Debugf("error listing ProxyGroups for ProxyClass: %v", err)
return nil
}
reqs := make([]reconcile.Request, 0)
proxyClassName := o.GetName()
for _, pg := range pgList.Items {
if pg.Spec.ProxyClass == proxyClassName {
reqs = append(reqs, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&pg)})
}
}
return reqs
}
}
// serviceHandlerForIngress returns a handler for Service events for the ingress
// reconciler that ensures that if the Service associated with an event is of
// interest to the reconciler, the associated Ingress(es) get reconciled.
@@ -687,6 +821,10 @@ func serviceHandlerForIngress(cl client.Client, logger *zap.SugaredLogger) handl
}
func serviceHandler(_ context.Context, o client.Object) []reconcile.Request {
if _, ok := o.GetAnnotations()[AnnotationProxyGroup]; ok {
// Do not reconcile Services for ProxyGroup.
return nil
}
if isManagedByType(o, "svc") {
// If this is a Service managed by a Service we want to enqueue its parent
return []reconcile.Request{{NamespacedName: parentFromObjectLabels(o)}}
@@ -712,3 +850,238 @@ func isMagicDNSName(name string) bool {
validMagicDNSName := regexp.MustCompile(`^[a-zA-Z0-9-]+\.[a-zA-Z0-9-]+\.ts\.net\.?$`)
return validMagicDNSName.MatchString(name)
}
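// Hedged, illustrative examples (not part of this change): isMagicDNSName("foo.tail1234.ts.net") and
// isMagicDNSName("foo.tail1234.ts.net.") both return true (a trailing dot is allowed), while
// isMagicDNSName("foo.example.com") returns false.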
// egressSvcsHandler accepts a Kubernetes object and returns a reconcile
// request for it if the object is a Tailscale egress Service meant to be
// exposed on a ProxyGroup.
func egressSvcsHandler(_ context.Context, o client.Object) []reconcile.Request {
if !isEgressSvcForProxyGroup(o) {
return nil
}
return []reconcile.Request{
{
NamespacedName: types.NamespacedName{
Namespace: o.GetNamespace(),
Name: o.GetName(),
},
},
}
}
// egressEpsHandler accepts an EndpointSlice and, if the EndpointSlice
// is for an egress service, returns a reconcile request for it.
func egressEpsHandler(_ context.Context, o client.Object) []reconcile.Request {
if typ := o.GetLabels()[labelSvcType]; typ != typeEgress {
return nil
}
return []reconcile.Request{
{
NamespacedName: types.NamespacedName{
Namespace: o.GetNamespace(),
Name: o.GetName(),
},
},
}
}
// egressEpsFromPGPods returns a Pod event handler that checks if the Pod is a replica for a ProxyGroup and, if it is,
// returns reconcile requests for all egress EndpointSlices for that ProxyGroup.
func egressEpsFromPGPods(cl client.Client, ns string) handler.MapFunc {
return func(_ context.Context, o client.Object) []reconcile.Request {
if v, ok := o.GetLabels()[LabelManaged]; !ok || v != "true" {
return nil
}
// TODO(irbekrm): for now this is good enough as all ProxyGroups are egress. Add a type check once we
// have ingress ProxyGroups.
if typ := o.GetLabels()[LabelParentType]; typ != "proxygroup" {
return nil
}
pg, ok := o.GetLabels()[LabelParentName]
if !ok {
return nil
}
return reconcileRequestsForPG(pg, cl, ns)
}
}
// egressEpsFromPGStateSecrets returns a Secret event handler that checks if the Secret is a state Secret for a ProxyGroup and, if it is,
// returns reconcile requests for all egress EndpointSlices for that ProxyGroup.
func egressEpsFromPGStateSecrets(cl client.Client, ns string) handler.MapFunc {
return func(_ context.Context, o client.Object) []reconcile.Request {
if v, ok := o.GetLabels()[LabelManaged]; !ok || v != "true" {
return nil
}
// TODO(irbekrm): for now this is good enough as all ProxyGroups are egress. Add a type check once we
// have ingress ProxyGroups.
if parentType := o.GetLabels()[LabelParentType]; parentType != "proxygroup" {
return nil
}
if secretType := o.GetLabels()[labelSecretType]; secretType != "state" {
return nil
}
pg, ok := o.GetLabels()[LabelParentName]
if !ok {
return nil
}
return reconcileRequestsForPG(pg, cl, ns)
}
}
// egressSvcFromEps is an event handler for EndpointSlices. If an EndpointSlice is for an egress ExternalName Service
// meant to be exposed on a ProxyGroup, it returns a reconcile request for the Service.
func egressSvcFromEps(_ context.Context, o client.Object) []reconcile.Request {
if typ := o.GetLabels()[labelSvcType]; typ != typeEgress {
return nil
}
if v, ok := o.GetLabels()[LabelManaged]; !ok || v != "true" {
return nil
}
svcName, ok := o.GetLabels()[LabelParentName]
if !ok {
return nil
}
svcNs, ok := o.GetLabels()[LabelParentNamespace]
if !ok {
return nil
}
return []reconcile.Request{
{
NamespacedName: types.NamespacedName{
Namespace: svcNs,
Name: svcName,
},
},
}
}
func reconcileRequestsForPG(pg string, cl client.Client, ns string) []reconcile.Request {
epsList := discoveryv1.EndpointSliceList{}
if err := cl.List(context.Background(), &epsList,
client.InNamespace(ns),
client.MatchingLabels(map[string]string{labelProxyGroup: pg})); err != nil {
return nil
}
reqs := make([]reconcile.Request, 0)
for _, ep := range epsList.Items {
reqs = append(reqs, reconcile.Request{
NamespacedName: types.NamespacedName{
Namespace: ep.Namespace,
Name: ep.Name,
},
})
}
return reqs
}
// egressSvcsFromEgressProxyGroup is an event handler for egress ProxyGroups. It returns reconcile requests for all
// user-created ExternalName Services that should be exposed on this ProxyGroup.
func egressSvcsFromEgressProxyGroup(cl client.Client, logger *zap.SugaredLogger) handler.MapFunc {
return func(_ context.Context, o client.Object) []reconcile.Request {
pg, ok := o.(*tsapi.ProxyGroup)
if !ok {
logger.Infof("[unexpected] ProxyGroup handler triggered for an object that is not a ProxyGroup")
return nil
}
if pg.Spec.Type != tsapi.ProxyGroupTypeEgress {
return nil
}
svcList := &corev1.ServiceList{}
if err := cl.List(context.Background(), svcList, client.MatchingFields{indexEgressProxyGroup: pg.Name}); err != nil {
logger.Infof("error listing Services: %v, skipping a reconcile for event on ProxyGroup %s", err, pg.Name)
return nil
}
reqs := make([]reconcile.Request, 0)
for _, svc := range svcList.Items {
reqs = append(reqs, reconcile.Request{
NamespacedName: types.NamespacedName{
Namespace: svc.Namespace,
Name: svc.Name,
},
})
}
return reqs
}
}
// epsFromExternalNameService is an event handler for ExternalName Services that define a Tailscale egress service that
// should be exposed on a ProxyGroup. It returns reconcile requests for EndpointSlices created for this Service.
func epsFromExternalNameService(cl client.Client, logger *zap.SugaredLogger, ns string) handler.MapFunc {
return func(_ context.Context, o client.Object) []reconcile.Request {
svc, ok := o.(*corev1.Service)
if !ok {
logger.Infof("[unexpected] Service handler triggered for an object that is not a Service")
return nil
}
if !isEgressSvcForProxyGroup(svc) {
return nil
}
epsList := &discoveryv1.EndpointSliceList{}
if err := cl.List(context.Background(), epsList, client.InNamespace(ns),
client.MatchingLabels(egressSvcChildResourceLabels(svc))); err != nil {
logger.Infof("error listing EndpointSlices: %v, skipping a reconcile for event on Service %s", err, svc.Name)
return nil
}
reqs := make([]reconcile.Request, 0)
for _, eps := range epsList.Items {
reqs = append(reqs, reconcile.Request{
NamespacedName: types.NamespacedName{
Namespace: eps.Namespace,
Name: eps.Name,
},
})
}
return reqs
}
}
// proxyClassesWithServiceMonitor returns an event handler that, if the event is for the Prometheus
// ServiceMonitor CRD, returns reconcile requests for all ProxyClasses that define that a ServiceMonitor should be created.
func proxyClassesWithServiceMonitor(cl client.Client, logger *zap.SugaredLogger) handler.MapFunc {
return func(ctx context.Context, o client.Object) []reconcile.Request {
crd, ok := o.(*apiextensionsv1.CustomResourceDefinition)
if !ok {
logger.Debugf("[unexpected] ServiceMonitor CRD handler received an object that is not a CustomResourceDefinition")
return nil
}
if crd.Name != serviceMonitorCRD {
logger.Debugf("[unexpected] ServiceMonitor CRD handler received an unexpected CRD %q", crd.Name)
return nil
}
pcl := &tsapi.ProxyClassList{}
if err := cl.List(ctx, pcl); err != nil {
logger.Debugf("[unexpected] error listing ProxyClasses: %v", err)
return nil
}
reqs := make([]reconcile.Request, 0)
for _, pc := range pcl.Items {
if pc.Spec.Metrics != nil && pc.Spec.Metrics.ServiceMonitor != nil && pc.Spec.Metrics.ServiceMonitor.Enable {
reqs = append(reqs, reconcile.Request{
NamespacedName: types.NamespacedName{Namespace: pc.Namespace, Name: pc.Name},
})
}
}
return reqs
}
}
// crdTransformer gets called before a CRD is stored in the c/r cache; it removes the CRD spec to reduce memory consumption.
func crdTransformer(log *zap.SugaredLogger) toolscache.TransformFunc {
return func(o any) (any, error) {
crd, ok := o.(*apiextensionsv1.CustomResourceDefinition)
if !ok {
log.Infof("[unexpected] CRD transformer called for a non-CRD type")
return crd, nil
}
crd.Spec = apiextensionsv1.CustomResourceDefinitionSpec{}
return crd, nil
}
}
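// Hedged illustration (not part of this change): after the transformer runs, a cached CRD keeps its metadata (name,
// labels, resourceVersion) but its Spec is zeroed, so the watch on the ServiceMonitor CRD retains only the small
// amount of data the reconcilers need to know whether the CRD exists.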
// indexEgressServices adds a local index to cached Tailscale egress Services meant to be exposed on a ProxyGroup. The
// index is used as a list filter.
func indexEgressServices(o client.Object) []string {
if !isEgressSvcForProxyGroup(o) {
return nil
}
return []string{o.GetAnnotations()[AnnotationProxyGroup]}
}

View File

@@ -16,6 +16,7 @@ import (
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
networkingv1 "k8s.io/api/networking/v1"
apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/tools/record"
@@ -432,6 +433,148 @@ func TestTailnetTargetIPAnnotation(t *testing.T) {
expectMissing[corev1.Secret](t, fc, "operator-ns", fullName)
}
func TestTailnetTargetIPAnnotation_IPCouldNotBeParsed(t *testing.T) {
fc := fake.NewFakeClient()
ft := &fakeTSClient{}
zl, err := zap.NewDevelopment()
if err != nil {
t.Fatal(err)
}
clock := tstest.NewClock(tstest.ClockOpts{})
sr := &ServiceReconciler{
Client: fc,
ssr: &tailscaleSTSReconciler{
Client: fc,
tsClient: ft,
defaultTags: []string{"tag:k8s"},
operatorNamespace: "operator-ns",
proxyImage: "tailscale/tailscale",
},
logger: zl.Sugar(),
clock: clock,
recorder: record.NewFakeRecorder(100),
}
tailnetTargetIP := "invalid-ip"
mustCreate(t, fc, &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: "test",
Namespace: "default",
UID: types.UID("1234-UID"),
Annotations: map[string]string{
AnnotationTailnetTargetIP: tailnetTargetIP,
},
},
Spec: corev1.ServiceSpec{
ClusterIP: "10.20.30.40",
Type: corev1.ServiceTypeLoadBalancer,
LoadBalancerClass: ptr.To("tailscale"),
},
})
expectReconciled(t, sr, "default", "test")
t0 := conditionTime(clock)
want := &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: "test",
Namespace: "default",
UID: types.UID("1234-UID"),
Annotations: map[string]string{
AnnotationTailnetTargetIP: tailnetTargetIP,
},
},
Spec: corev1.ServiceSpec{
ClusterIP: "10.20.30.40",
Type: corev1.ServiceTypeLoadBalancer,
LoadBalancerClass: ptr.To("tailscale"),
},
Status: corev1.ServiceStatus{
Conditions: []metav1.Condition{{
Type: string(tsapi.ProxyReady),
Status: metav1.ConditionFalse,
LastTransitionTime: t0,
Reason: reasonProxyInvalid,
Message: `unable to provision proxy resources: invalid Service: invalid value of annotation tailscale.com/tailnet-ip: "invalid-ip" could not be parsed as a valid IP Address, error: ParseAddr("invalid-ip"): unable to parse IP`,
}},
},
}
expectEqual(t, fc, want, nil)
}
func TestTailnetTargetIPAnnotation_InvalidIP(t *testing.T) {
fc := fake.NewFakeClient()
ft := &fakeTSClient{}
zl, err := zap.NewDevelopment()
if err != nil {
t.Fatal(err)
}
clock := tstest.NewClock(tstest.ClockOpts{})
sr := &ServiceReconciler{
Client: fc,
ssr: &tailscaleSTSReconciler{
Client: fc,
tsClient: ft,
defaultTags: []string{"tag:k8s"},
operatorNamespace: "operator-ns",
proxyImage: "tailscale/tailscale",
},
logger: zl.Sugar(),
clock: clock,
recorder: record.NewFakeRecorder(100),
}
tailnetTargetIP := "999.999.999.999"
mustCreate(t, fc, &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: "test",
Namespace: "default",
UID: types.UID("1234-UID"),
Annotations: map[string]string{
AnnotationTailnetTargetIP: tailnetTargetIP,
},
},
Spec: corev1.ServiceSpec{
ClusterIP: "10.20.30.40",
Type: corev1.ServiceTypeLoadBalancer,
LoadBalancerClass: ptr.To("tailscale"),
},
})
expectReconciled(t, sr, "default", "test")
t0 := conditionTime(clock)
want := &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: "test",
Namespace: "default",
UID: types.UID("1234-UID"),
Annotations: map[string]string{
AnnotationTailnetTargetIP: tailnetTargetIP,
},
},
Spec: corev1.ServiceSpec{
ClusterIP: "10.20.30.40",
Type: corev1.ServiceTypeLoadBalancer,
LoadBalancerClass: ptr.To("tailscale"),
},
Status: corev1.ServiceStatus{
Conditions: []metav1.Condition{{
Type: string(tsapi.ProxyReady),
Status: metav1.ConditionFalse,
LastTransitionTime: t0,
Reason: reasonProxyInvalid,
Message: `unable to provision proxy resources: invalid Service: invalid value of annotation tailscale.com/tailnet-ip: "999.999.999.999" could not be parsed as a valid IP Address, error: ParseAddr("999.999.999.999"): IPv4 field has value >255`,
}},
},
}
expectEqual(t, fc, want, nil)
}
func TestAnnotations(t *testing.T) {
fc := fake.NewFakeClient()
ft := &fakeTSClient{}
@@ -987,7 +1130,7 @@ func TestProxyClassForService(t *testing.T) {
AcceptRoutes: true,
},
StatefulSet: &tsapi.StatefulSet{
Labels: map[string]string{"foo": "bar"},
Labels: tsapi.Labels{"foo": "bar"},
Annotations: map[string]string{"bar.io/foo": "some-val"},
Pod: &tsapi.Pod{Annotations: map[string]string{"foo.io/bar": "some-val"}}}},
}
@@ -1064,7 +1207,7 @@ func TestProxyClassForService(t *testing.T) {
pc.Status = tsapi.ProxyClassStatus{
Conditions: []metav1.Condition{{
Status: metav1.ConditionTrue,
Type: string(tsapi.ProxyClassready),
Type: string(tsapi.ProxyClassReady),
ObservedGeneration: pc.Generation,
}}}
})
@@ -1246,7 +1389,7 @@ func TestTailscaledConfigfileHash(t *testing.T) {
parentType: "svc",
hostname: "default-test",
clusterTargetIP: "10.20.30.40",
confFileHash: "e09bededa0379920141cbd0b0dbdf9b8b66545877f9e8397423f5ce3e1ba439e",
confFileHash: "acf3467364b0a3ba9b8ee0dd772cb7c2f0bf585e288fa99b7fe4566009ed6041",
app: kubetypes.AppIngressProxy,
}
expectEqual(t, fc, expectedSTS(t, fc, o), nil)
@@ -1257,7 +1400,7 @@ func TestTailscaledConfigfileHash(t *testing.T) {
mak.Set(&svc.Annotations, AnnotationHostname, "another-test")
})
o.hostname = "another-test"
o.confFileHash = "5d754cf55463135ee34aa9821f2fd8483b53eb0570c3740c84a086304f427684"
o.confFileHash = "d4cc13f09f55f4f6775689004f9a466723325b84d2b590692796bfe22aeaa389"
expectReconciled(t, sr, "default", "test")
expectEqual(t, fc, expectedSTS(t, fc, o), nil)
}
@@ -1487,6 +1630,72 @@ func Test_clusterDomainFromResolverConf(t *testing.T) {
})
}
}
func Test_authKeyRemoval(t *testing.T) {
fc := fake.NewFakeClient()
ft := &fakeTSClient{}
zl, err := zap.NewDevelopment()
if err != nil {
t.Fatal(err)
}
// 1. A new Service that should be exposed via Tailscale gets created; a Secret with a config that contains an auth
// key is generated.
clock := tstest.NewClock(tstest.ClockOpts{})
sr := &ServiceReconciler{
Client: fc,
ssr: &tailscaleSTSReconciler{
Client: fc,
tsClient: ft,
defaultTags: []string{"tag:k8s"},
operatorNamespace: "operator-ns",
proxyImage: "tailscale/tailscale",
},
logger: zl.Sugar(),
clock: clock,
}
mustCreate(t, fc, &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: "test",
Namespace: "default",
UID: types.UID("1234-UID"),
},
Spec: corev1.ServiceSpec{
ClusterIP: "10.20.30.40",
Type: corev1.ServiceTypeLoadBalancer,
LoadBalancerClass: ptr.To("tailscale"),
},
})
expectReconciled(t, sr, "default", "test")
fullName, shortName := findGenName(t, fc, "default", "test", "svc")
opts := configOpts{
stsName: shortName,
secretName: fullName,
namespace: "default",
parentType: "svc",
hostname: "default-test",
clusterTargetIP: "10.20.30.40",
app: kubetypes.AppIngressProxy,
}
expectEqual(t, fc, expectedSecret(t, fc, opts), nil)
expectEqual(t, fc, expectedHeadlessService(shortName, "svc"), nil)
expectEqual(t, fc, expectedSTS(t, fc, opts), removeHashAnnotation)
// 2. Apply update to the Secret that imitates the proxy setting device_id.
s := expectedSecret(t, fc, opts)
mustUpdate(t, fc, s.Namespace, s.Name, func(s *corev1.Secret) {
mak.Set(&s.Data, "device_id", []byte("dkkdi4CNTRL"))
})
// 3. Config should no longer contain the auth key
expectReconciled(t, sr, "default", "test")
opts.shouldRemoveAuthKey = true
opts.secretExtraData = map[string][]byte{"device_id": []byte("dkkdi4CNTRL")}
expectEqual(t, fc, expectedSecret(t, fc, opts), nil)
}
func Test_externalNameService(t *testing.T) {
fc := fake.NewFakeClient()
@@ -1558,6 +1767,106 @@ func Test_externalNameService(t *testing.T) {
expectEqual(t, fc, expectedSTS(t, fc, opts), removeHashAnnotation)
}
func Test_metricsResourceCreation(t *testing.T) {
pc := &tsapi.ProxyClass{
ObjectMeta: metav1.ObjectMeta{Name: "metrics", Generation: 1},
Spec: tsapi.ProxyClassSpec{},
Status: tsapi.ProxyClassStatus{
Conditions: []metav1.Condition{{
Status: metav1.ConditionTrue,
Type: string(tsapi.ProxyClassReady),
ObservedGeneration: 1,
}}},
}
svc := &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: "test",
Namespace: "default",
UID: types.UID("1234-UID"),
Labels: map[string]string{LabelProxyClass: "metrics"},
},
Spec: corev1.ServiceSpec{
ClusterIP: "10.20.30.40",
Type: corev1.ServiceTypeLoadBalancer,
LoadBalancerClass: ptr.To("tailscale"),
},
}
crd := &apiextensionsv1.CustomResourceDefinition{ObjectMeta: metav1.ObjectMeta{Name: serviceMonitorCRD}}
fc := fake.NewClientBuilder().
WithScheme(tsapi.GlobalScheme).
WithObjects(pc, svc).
WithStatusSubresource(pc).
Build()
ft := &fakeTSClient{}
zl, err := zap.NewDevelopment()
if err != nil {
t.Fatal(err)
}
clock := tstest.NewClock(tstest.ClockOpts{})
sr := &ServiceReconciler{
Client: fc,
ssr: &tailscaleSTSReconciler{
Client: fc,
tsClient: ft,
operatorNamespace: "operator-ns",
},
logger: zl.Sugar(),
clock: clock,
}
expectReconciled(t, sr, "default", "test")
fullName, shortName := findGenName(t, fc, "default", "test", "svc")
opts := configOpts{
stsName: shortName,
secretName: fullName,
namespace: "default",
parentType: "svc",
tailscaleNamespace: "operator-ns",
hostname: "default-test",
namespaced: true,
proxyType: proxyTypeIngressService,
app: kubetypes.AppIngressProxy,
resourceVersion: "1",
}
// 1. Enable metrics - expect metrics Service to be created
mustUpdate(t, fc, "", "metrics", func(pc *tsapi.ProxyClass) {
pc.Spec = tsapi.ProxyClassSpec{Metrics: &tsapi.Metrics{Enable: true}}
})
expectReconciled(t, sr, "default", "test")
opts.enableMetrics = true
expectEqual(t, fc, expectedMetricsService(opts), nil)
// 2. Enable ServiceMonitor - should not error when there is no ServiceMonitor CRD in cluster
mustUpdate(t, fc, "", "metrics", func(pc *tsapi.ProxyClass) {
pc.Spec.Metrics.ServiceMonitor = &tsapi.ServiceMonitor{Enable: true}
})
expectReconciled(t, sr, "default", "test")
// 3. Create ServiceMonitor CRD and reconcile - ServiceMonitor should get created
mustCreate(t, fc, crd)
expectReconciled(t, sr, "default", "test")
expectEqualUnstructured(t, fc, expectedServiceMonitor(t, opts))
// 4. A change to ServiceMonitor config gets reflected in the ServiceMonitor resource
mustUpdate(t, fc, "", "metrics", func(pc *tsapi.ProxyClass) {
pc.Spec.Metrics.ServiceMonitor.Labels = tsapi.Labels{"foo": "bar"}
})
expectReconciled(t, sr, "default", "test")
opts.serviceMonitorLabels = tsapi.Labels{"foo": "bar"}
opts.resourceVersion = "2"
expectEqual(t, fc, expectedMetricsService(opts), nil)
expectEqualUnstructured(t, fc, expectedServiceMonitor(t, opts))
// 5. Disable metrics - expect metrics Service to be deleted
mustUpdate(t, fc, "", "metrics", func(pc *tsapi.ProxyClass) {
pc.Spec.Metrics = nil
})
expectReconciled(t, sr, "default", "test")
expectMissing[corev1.Service](t, fc, "operator-ns", metricsResourceName(opts.stsName))
// The ServiceMonitor gets garbage collected when the Service gets deleted (it has an OwnerReference pointing at the
// Service object). We cannot test this using the fake client.
}
func toFQDN(t *testing.T, s string) dnsname.FQDN {
t.Helper()
fqdn, err := dnsname.ToFQDN(s)

View File

@@ -311,7 +311,7 @@ func (h *apiserverProxy) addImpersonationHeadersAsRequired(r *http.Request) {
// Now add the impersonation headers that we want.
if err := addImpersonationHeaders(r, h.log); err != nil {
log.Printf("failed to add impersonation headers: " + err.Error())
log.Print("failed to add impersonation headers: ", err.Error())
}
}

View File

@@ -15,6 +15,7 @@ import (
dockerref "github.com/distribution/reference"
"go.uber.org/zap"
corev1 "k8s.io/api/core/v1"
apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
apiequality "k8s.io/apimachinery/pkg/api/equality"
apierrors "k8s.io/apimachinery/pkg/api/errors"
apivalidation "k8s.io/apimachinery/pkg/api/validation"
@@ -95,14 +96,14 @@ func (pcr *ProxyClassReconciler) Reconcile(ctx context.Context, req reconcile.Re
pcr.mu.Unlock()
oldPCStatus := pc.Status.DeepCopy()
if errs := pcr.validate(pc); errs != nil {
if errs := pcr.validate(ctx, pc); errs != nil {
msg := fmt.Sprintf(messageProxyClassInvalid, errs.ToAggregate().Error())
pcr.recorder.Event(pc, corev1.EventTypeWarning, reasonProxyClassInvalid, msg)
tsoperator.SetProxyClassCondition(pc, tsapi.ProxyClassready, metav1.ConditionFalse, reasonProxyClassInvalid, msg, pc.Generation, pcr.clock, logger)
tsoperator.SetProxyClassCondition(pc, tsapi.ProxyClassReady, metav1.ConditionFalse, reasonProxyClassInvalid, msg, pc.Generation, pcr.clock, logger)
} else {
tsoperator.SetProxyClassCondition(pc, tsapi.ProxyClassready, metav1.ConditionTrue, reasonProxyClassValid, reasonProxyClassValid, pc.Generation, pcr.clock, logger)
tsoperator.SetProxyClassCondition(pc, tsapi.ProxyClassReady, metav1.ConditionTrue, reasonProxyClassValid, reasonProxyClassValid, pc.Generation, pcr.clock, logger)
}
if !apiequality.Semantic.DeepEqual(oldPCStatus, pc.Status) {
if !apiequality.Semantic.DeepEqual(oldPCStatus, &pc.Status) {
if err := pcr.Client.Status().Update(ctx, pc); err != nil {
logger.Errorf("error updating ProxyClass status: %v", err)
return reconcile.Result{}, err
@@ -111,10 +112,10 @@ func (pcr *ProxyClassReconciler) Reconcile(ctx context.Context, req reconcile.Re
return reconcile.Result{}, nil
}
func (pcr *ProxyClassReconciler) validate(pc *tsapi.ProxyClass) (violations field.ErrorList) {
func (pcr *ProxyClassReconciler) validate(ctx context.Context, pc *tsapi.ProxyClass) (violations field.ErrorList) {
if sts := pc.Spec.StatefulSet; sts != nil {
if len(sts.Labels) > 0 {
if errs := metavalidation.ValidateLabels(sts.Labels, field.NewPath(".spec.statefulSet.labels")); errs != nil {
if errs := metavalidation.ValidateLabels(sts.Labels.Parse(), field.NewPath(".spec.statefulSet.labels")); errs != nil {
violations = append(violations, errs...)
}
}
@@ -125,7 +126,7 @@ func (pcr *ProxyClassReconciler) validate(pc *tsapi.ProxyClass) (violations fiel
}
if pod := sts.Pod; pod != nil {
if len(pod.Labels) > 0 {
if errs := metavalidation.ValidateLabels(pod.Labels, field.NewPath(".spec.statefulSet.pod.labels")); errs != nil {
if errs := metavalidation.ValidateLabels(pod.Labels.Parse(), field.NewPath(".spec.statefulSet.pod.labels")); errs != nil {
violations = append(violations, errs...)
}
}
@@ -160,9 +161,28 @@ func (pcr *ProxyClassReconciler) validate(pc *tsapi.ProxyClass) (violations fiel
violations = append(violations, field.TypeInvalid(field.NewPath("spec", "statefulSet", "pod", "tailscaleInitContainer", "image"), tc.Image, err.Error()))
}
}
if tc.Debug != nil {
violations = append(violations, field.TypeInvalid(field.NewPath("spec", "statefulSet", "pod", "tailscaleInitContainer", "debug"), tc.Debug, "debug settings cannot be configured on the init container"))
}
}
}
}
if pc.Spec.Metrics != nil && pc.Spec.Metrics.ServiceMonitor != nil && pc.Spec.Metrics.ServiceMonitor.Enable {
found, err := hasServiceMonitorCRD(ctx, pcr.Client)
if err != nil {
pcr.logger.Infof("[unexpected]: error retrieving %q CRD: %v", serviceMonitorCRD, err)
// best effort validation - don't error out here
} else if !found {
msg := fmt.Sprintf("ProxyClass defines that a ServiceMonitor custom resource should be created, but %q CRD was not found", serviceMonitorCRD)
violations = append(violations, field.TypeInvalid(field.NewPath("spec", "metrics", "serviceMonitor"), "enable", msg))
}
}
if pc.Spec.Metrics != nil && pc.Spec.Metrics.ServiceMonitor != nil && len(pc.Spec.Metrics.ServiceMonitor.Labels) > 0 {
if errs := metavalidation.ValidateLabels(pc.Spec.Metrics.ServiceMonitor.Labels.Parse(), field.NewPath(".spec.metrics.serviceMonitor.labels")); errs != nil {
violations = append(violations, errs...)
}
}
// We do not validate embedded fields (security context, resource
// requirements etc) as we inherit upstream validation for those fields.
// Invalid values would get rejected by upstream validations at apply
@@ -170,6 +190,16 @@ func (pcr *ProxyClassReconciler) validate(pc *tsapi.ProxyClass) (violations fiel
return violations
}
func hasServiceMonitorCRD(ctx context.Context, cl client.Client) (bool, error) {
sm := &apiextensionsv1.CustomResourceDefinition{}
if err := cl.Get(ctx, types.NamespacedName{Name: serviceMonitorCRD}, sm); apierrors.IsNotFound(err) {
return false, nil
} else if err != nil {
return false, err
}
return true, nil
}
// maybeCleanup removes tailscale.com finalizer and ensures that the ProxyClass
// is no longer counted towards k8s_proxyclass_resources.
func (pcr *ProxyClassReconciler) maybeCleanup(ctx context.Context, logger *zap.SugaredLogger, pc *tsapi.ProxyClass) error {

View File

@@ -8,10 +8,12 @@
package main
import (
"context"
"testing"
"time"
"go.uber.org/zap"
apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/tools/record"
@@ -34,10 +36,10 @@ func TestProxyClass(t *testing.T) {
},
Spec: tsapi.ProxyClassSpec{
StatefulSet: &tsapi.StatefulSet{
Labels: map[string]string{"foo": "bar", "xyz1234": "abc567"},
Labels: tsapi.Labels{"foo": "bar", "xyz1234": "abc567"},
Annotations: map[string]string{"foo.io/bar": "{'key': 'val1232'}"},
Pod: &tsapi.Pod{
Labels: map[string]string{"foo": "bar", "xyz1234": "abc567"},
Labels: tsapi.Labels{"foo": "bar", "xyz1234": "abc567"},
Annotations: map[string]string{"foo.io/bar": "{'key': 'val1232'}"},
TailscaleContainer: &tsapi.Container{
Env: []tsapi.Env{{Name: "FOO", Value: "BAR"}},
@@ -69,7 +71,7 @@ func TestProxyClass(t *testing.T) {
// 1. A valid ProxyClass resource gets its status updated to Ready.
expectReconciled(t, pcr, "", "test")
pc.Status.Conditions = append(pc.Status.Conditions, metav1.Condition{
Type: string(tsapi.ProxyClassready),
Type: string(tsapi.ProxyClassReady),
Status: metav1.ConditionTrue,
Reason: reasonProxyClassValid,
Message: reasonProxyClassValid,
@@ -85,7 +87,7 @@ func TestProxyClass(t *testing.T) {
})
expectReconciled(t, pcr, "", "test")
msg := `ProxyClass is not valid: .spec.statefulSet.labels: Invalid value: "?!someVal": a valid label must be an empty string or consist of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyValue', or 'my_value', or '12345', regex used for validation is '(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?')`
tsoperator.SetProxyClassCondition(pc, tsapi.ProxyClassready, metav1.ConditionFalse, reasonProxyClassInvalid, msg, 0, cl, zl.Sugar())
tsoperator.SetProxyClassCondition(pc, tsapi.ProxyClassReady, metav1.ConditionFalse, reasonProxyClassInvalid, msg, 0, cl, zl.Sugar())
expectEqual(t, fc, pc, nil)
expectedEvent := "Warning ProxyClassInvalid ProxyClass is not valid: .spec.statefulSet.labels: Invalid value: \"?!someVal\": a valid label must be an empty string or consist of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyValue', or 'my_value', or '12345', regex used for validation is '(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?')"
expectEvents(t, fr, []string{expectedEvent})
@@ -99,7 +101,7 @@ func TestProxyClass(t *testing.T) {
})
expectReconciled(t, pcr, "", "test")
msg = `ProxyClass is not valid: spec.statefulSet.pod.tailscaleContainer.image: Invalid value: "FOO bar": invalid reference format: repository name (library/FOO bar) must be lowercase`
tsoperator.SetProxyClassCondition(pc, tsapi.ProxyClassready, metav1.ConditionFalse, reasonProxyClassInvalid, msg, 0, cl, zl.Sugar())
tsoperator.SetProxyClassCondition(pc, tsapi.ProxyClassReady, metav1.ConditionFalse, reasonProxyClassInvalid, msg, 0, cl, zl.Sugar())
expectEqual(t, fc, pc, nil)
expectedEvent = `Warning ProxyClassInvalid ProxyClass is not valid: spec.statefulSet.pod.tailscaleContainer.image: Invalid value: "FOO bar": invalid reference format: repository name (library/FOO bar) must be lowercase`
expectEvents(t, fr, []string{expectedEvent})
@@ -118,7 +120,7 @@ func TestProxyClass(t *testing.T) {
})
expectReconciled(t, pcr, "", "test")
msg = `ProxyClass is not valid: spec.statefulSet.pod.tailscaleInitContainer.image: Invalid value: "FOO bar": invalid reference format: repository name (library/FOO bar) must be lowercase`
tsoperator.SetProxyClassCondition(pc, tsapi.ProxyClassready, metav1.ConditionFalse, reasonProxyClassInvalid, msg, 0, cl, zl.Sugar())
tsoperator.SetProxyClassCondition(pc, tsapi.ProxyClassReady, metav1.ConditionFalse, reasonProxyClassInvalid, msg, 0, cl, zl.Sugar())
expectEqual(t, fc, pc, nil)
expectedEvent = `Warning ProxyClassInvalid ProxyClass is not valid: spec.statefulSet.pod.tailscaleInitContainer.image: Invalid value: "FOO bar": invalid reference format: repository name (library/FOO bar) must be lowercase`
expectEvents(t, fr, []string{expectedEvent})
@@ -134,4 +136,95 @@ func TestProxyClass(t *testing.T) {
"Warning CustomTSEnvVar ProxyClass overrides the default value for EXPERIMENTAL_ALLOW_PROXYING_CLUSTER_TRAFFIC_VIA_INGRESS env var for tailscale container. Running with custom values for Tailscale env vars is not recommended and might break in the future."}
expectReconciled(t, pcr, "", "test")
expectEvents(t, fr, expectedEvents)
// 6. A ProxyClass with ServiceMonitor enabled in a cluster that does not have the ServiceMonitor CRD is invalid
pc.Spec.Metrics = &tsapi.Metrics{Enable: true, ServiceMonitor: &tsapi.ServiceMonitor{Enable: true}}
mustUpdate(t, fc, "", "test", func(proxyClass *tsapi.ProxyClass) {
proxyClass.Spec = pc.Spec
})
expectReconciled(t, pcr, "", "test")
msg = `ProxyClass is not valid: spec.metrics.serviceMonitor: Invalid value: "enable": ProxyClass defines that a ServiceMonitor custom resource should be created, but "servicemonitors.monitoring.coreos.com" CRD was not found`
tsoperator.SetProxyClassCondition(pc, tsapi.ProxyClassReady, metav1.ConditionFalse, reasonProxyClassInvalid, msg, 0, cl, zl.Sugar())
expectEqual(t, fc, pc, nil)
expectedEvent = "Warning ProxyClassInvalid " + msg
expectEvents(t, fr, []string{expectedEvent})
// 7. A ProxyClass with ServiceMonitor enabled and in a cluster that does have the ServiceMonitor CRD is valid
crd := &apiextensionsv1.CustomResourceDefinition{ObjectMeta: metav1.ObjectMeta{Name: serviceMonitorCRD}}
mustCreate(t, fc, crd)
expectReconciled(t, pcr, "", "test")
tsoperator.SetProxyClassCondition(pc, tsapi.ProxyClassReady, metav1.ConditionTrue, reasonProxyClassValid, reasonProxyClassValid, 0, cl, zl.Sugar())
expectEqual(t, fc, pc, nil)
// 8. A ProxyClass with invalid ServiceMonitor labels gets its status updated to Invalid with an error message.
pc.Spec.Metrics.ServiceMonitor.Labels = tsapi.Labels{"foo": "bar!"}
mustUpdate(t, fc, "", "test", func(proxyClass *tsapi.ProxyClass) {
proxyClass.Spec.Metrics.ServiceMonitor.Labels = pc.Spec.Metrics.ServiceMonitor.Labels
})
expectReconciled(t, pcr, "", "test")
msg = `ProxyClass is not valid: .spec.metrics.serviceMonitor.labels: Invalid value: "bar!": a valid label must be an empty string or consist of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyValue', or 'my_value', or '12345', regex used for validation is '(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?')`
tsoperator.SetProxyClassCondition(pc, tsapi.ProxyClassReady, metav1.ConditionFalse, reasonProxyClassInvalid, msg, 0, cl, zl.Sugar())
expectEqual(t, fc, pc, nil)
// 9. A ProxyClass with valid ServiceMonitor labels gets its status updated to Valid.
pc.Spec.Metrics.ServiceMonitor.Labels = tsapi.Labels{"foo": "bar", "xyz1234": "abc567", "empty": "", "onechar": "a"}
mustUpdate(t, fc, "", "test", func(proxyClass *tsapi.ProxyClass) {
proxyClass.Spec.Metrics.ServiceMonitor.Labels = pc.Spec.Metrics.ServiceMonitor.Labels
})
expectReconciled(t, pcr, "", "test")
tsoperator.SetProxyClassCondition(pc, tsapi.ProxyClassReady, metav1.ConditionTrue, reasonProxyClassValid, reasonProxyClassValid, 0, cl, zl.Sugar())
expectEqual(t, fc, pc, nil)
}
func TestValidateProxyClass(t *testing.T) {
for name, tc := range map[string]struct {
pc *tsapi.ProxyClass
valid bool
}{
"empty": {
valid: true,
pc: &tsapi.ProxyClass{},
},
"debug_enabled_for_main_container": {
valid: true,
pc: &tsapi.ProxyClass{
Spec: tsapi.ProxyClassSpec{
StatefulSet: &tsapi.StatefulSet{
Pod: &tsapi.Pod{
TailscaleContainer: &tsapi.Container{
Debug: &tsapi.Debug{
Enable: true,
},
},
},
},
},
},
},
"debug_enabled_for_init_container": {
valid: false,
pc: &tsapi.ProxyClass{
Spec: tsapi.ProxyClassSpec{
StatefulSet: &tsapi.StatefulSet{
Pod: &tsapi.Pod{
TailscaleInitContainer: &tsapi.Container{
Debug: &tsapi.Debug{
Enable: true,
},
},
},
},
},
},
},
} {
t.Run(name, func(t *testing.T) {
pcr := &ProxyClassReconciler{}
err := pcr.validate(context.Background(), tc.pc)
valid := err == nil
if valid != tc.valid {
t.Errorf("expected valid=%v, got valid=%v, err=%v", tc.valid, valid, err)
}
})
}
}

View File

@@ -0,0 +1,606 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build !plan9
package main
import (
"context"
"crypto/sha256"
"encoding/json"
"fmt"
"net/http"
"slices"
"strings"
"sync"
"github.com/pkg/errors"
"go.uber.org/zap"
xslices "golang.org/x/exp/slices"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
rbacv1 "k8s.io/api/rbac/v1"
apiequality "k8s.io/apimachinery/pkg/api/equality"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/tools/record"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
"tailscale.com/client/tailscale"
"tailscale.com/ipn"
tsoperator "tailscale.com/k8s-operator"
tsapi "tailscale.com/k8s-operator/apis/v1alpha1"
"tailscale.com/kube/kubetypes"
"tailscale.com/tailcfg"
"tailscale.com/tstime"
"tailscale.com/types/ptr"
"tailscale.com/util/clientmetric"
"tailscale.com/util/mak"
"tailscale.com/util/set"
)
const (
reasonProxyGroupCreationFailed = "ProxyGroupCreationFailed"
reasonProxyGroupReady = "ProxyGroupReady"
reasonProxyGroupCreating = "ProxyGroupCreating"
reasonProxyGroupInvalid = "ProxyGroupInvalid"
// Copied from k8s.io/apiserver/pkg/registry/generic/registry/store.go@cccad306d649184bf2a0e319ba830c53f65c445c
optimisticLockErrorMsg = "the object has been modified; please apply your changes to the latest version and try again"
)
var (
gaugeEgressProxyGroupResources = clientmetric.NewGauge(kubetypes.MetricProxyGroupEgressCount)
gaugeIngressProxyGroupResources = clientmetric.NewGauge(kubetypes.MetricProxyGroupIngressCount)
)
// ProxyGroupReconciler ensures cluster resources for a ProxyGroup definition.
type ProxyGroupReconciler struct {
client.Client
l *zap.SugaredLogger
recorder record.EventRecorder
clock tstime.Clock
tsClient tsClient
// User-specified defaults from the helm installation.
tsNamespace string
proxyImage string
defaultTags []string
tsFirewallMode string
defaultProxyClass string
mu sync.Mutex // protects following
egressProxyGroups set.Slice[types.UID] // for egress proxygroups gauge
ingressProxyGroups set.Slice[types.UID] // for ingress proxygroups gauge
}
func (r *ProxyGroupReconciler) logger(name string) *zap.SugaredLogger {
return r.l.With("ProxyGroup", name)
}
func (r *ProxyGroupReconciler) Reconcile(ctx context.Context, req reconcile.Request) (_ reconcile.Result, err error) {
logger := r.logger(req.Name)
logger.Debugf("starting reconcile")
defer logger.Debugf("reconcile finished")
pg := new(tsapi.ProxyGroup)
err = r.Get(ctx, req.NamespacedName, pg)
if apierrors.IsNotFound(err) {
logger.Debugf("ProxyGroup not found, assuming it was deleted")
return reconcile.Result{}, nil
} else if err != nil {
return reconcile.Result{}, fmt.Errorf("failed to get tailscale.com ProxyGroup: %w", err)
}
if markedForDeletion(pg) {
logger.Debugf("ProxyGroup is being deleted, cleaning up resources")
ix := xslices.Index(pg.Finalizers, FinalizerName)
if ix < 0 {
logger.Debugf("no finalizer, nothing to do")
return reconcile.Result{}, nil
}
if done, err := r.maybeCleanup(ctx, pg); err != nil {
return reconcile.Result{}, err
} else if !done {
logger.Debugf("ProxyGroup resource cleanup not yet finished, will retry...")
return reconcile.Result{RequeueAfter: shortRequeue}, nil
}
pg.Finalizers = slices.Delete(pg.Finalizers, ix, ix+1)
if err := r.Update(ctx, pg); err != nil {
return reconcile.Result{}, err
}
return reconcile.Result{}, nil
}
oldPGStatus := pg.Status.DeepCopy()
setStatusReady := func(pg *tsapi.ProxyGroup, status metav1.ConditionStatus, reason, message string) (reconcile.Result, error) {
tsoperator.SetProxyGroupCondition(pg, tsapi.ProxyGroupReady, status, reason, message, pg.Generation, r.clock, logger)
if !apiequality.Semantic.DeepEqual(oldPGStatus, &pg.Status) {
// An error encountered here should get returned by the Reconcile function.
if updateErr := r.Client.Status().Update(ctx, pg); updateErr != nil {
err = errors.Wrap(err, updateErr.Error())
}
}
return reconcile.Result{}, err
}
if !slices.Contains(pg.Finalizers, FinalizerName) {
// This log line is printed exactly once during initial provisioning,
// because once the finalizer is in place this block gets skipped. So,
// this is a nice place to log that the high level, multi-reconcile
// operation is underway.
logger.Infof("ensuring ProxyGroup is set up")
pg.Finalizers = append(pg.Finalizers, FinalizerName)
if err = r.Update(ctx, pg); err != nil {
err = fmt.Errorf("error adding finalizer: %w", err)
return setStatusReady(pg, metav1.ConditionFalse, reasonProxyGroupCreationFailed, reasonProxyGroupCreationFailed)
}
}
if err = r.validate(pg); err != nil {
message := fmt.Sprintf("ProxyGroup is invalid: %s", err)
r.recorder.Eventf(pg, corev1.EventTypeWarning, reasonProxyGroupInvalid, message)
return setStatusReady(pg, metav1.ConditionFalse, reasonProxyGroupInvalid, message)
}
proxyClassName := r.defaultProxyClass
if pg.Spec.ProxyClass != "" {
proxyClassName = pg.Spec.ProxyClass
}
var proxyClass *tsapi.ProxyClass
if proxyClassName != "" {
proxyClass = new(tsapi.ProxyClass)
err := r.Get(ctx, types.NamespacedName{Name: proxyClassName}, proxyClass)
if apierrors.IsNotFound(err) {
err = nil
message := fmt.Sprintf("the ProxyGroup's ProxyClass %s does not (yet) exist", proxyClassName)
logger.Info(message)
return setStatusReady(pg, metav1.ConditionFalse, reasonProxyGroupCreating, message)
}
if err != nil {
err = fmt.Errorf("error getting ProxyGroup's ProxyClass %s: %s", proxyClassName, err)
r.recorder.Eventf(pg, corev1.EventTypeWarning, reasonProxyGroupCreationFailed, err.Error())
return setStatusReady(pg, metav1.ConditionFalse, reasonProxyGroupCreationFailed, err.Error())
}
if !tsoperator.ProxyClassIsReady(proxyClass) {
message := fmt.Sprintf("the ProxyGroup's ProxyClass %s is not yet in a ready state, waiting...", proxyClassName)
logger.Info(message)
return setStatusReady(pg, metav1.ConditionFalse, reasonProxyGroupCreating, message)
}
}
if err = r.maybeProvision(ctx, pg, proxyClass); err != nil {
reason := reasonProxyGroupCreationFailed
msg := fmt.Sprintf("error provisioning ProxyGroup resources: %s", err)
if strings.Contains(err.Error(), optimisticLockErrorMsg) {
reason = reasonProxyGroupCreating
msg = fmt.Sprintf("optimistic lock error, retrying: %s", err)
err = nil
logger.Info(msg)
} else {
r.recorder.Eventf(pg, corev1.EventTypeWarning, reason, msg)
}
return setStatusReady(pg, metav1.ConditionFalse, reason, msg)
}
desiredReplicas := int(pgReplicas(pg))
if len(pg.Status.Devices) < desiredReplicas {
message := fmt.Sprintf("%d/%d ProxyGroup pods running", len(pg.Status.Devices), desiredReplicas)
logger.Debug(message)
return setStatusReady(pg, metav1.ConditionFalse, reasonProxyGroupCreating, message)
}
if len(pg.Status.Devices) > desiredReplicas {
message := fmt.Sprintf("waiting for %d ProxyGroup pods to shut down", len(pg.Status.Devices)-desiredReplicas)
logger.Debug(message)
return setStatusReady(pg, metav1.ConditionFalse, reasonProxyGroupCreating, message)
}
logger.Info("ProxyGroup resources synced")
return setStatusReady(pg, metav1.ConditionTrue, reasonProxyGroupReady, reasonProxyGroupReady)
}
func (r *ProxyGroupReconciler) maybeProvision(ctx context.Context, pg *tsapi.ProxyGroup, proxyClass *tsapi.ProxyClass) error {
logger := r.logger(pg.Name)
r.mu.Lock()
r.ensureAddedToGaugeForProxyGroup(pg)
r.mu.Unlock()
cfgHash, err := r.ensureConfigSecretsCreated(ctx, pg, proxyClass)
if err != nil {
return fmt.Errorf("error provisioning config Secrets: %w", err)
}
// State secrets are precreated so we can use the ProxyGroup CR as their owner ref.
stateSecrets := pgStateSecrets(pg, r.tsNamespace)
for _, sec := range stateSecrets {
if _, err := createOrUpdate(ctx, r.Client, r.tsNamespace, sec, func(s *corev1.Secret) {
s.ObjectMeta.Labels = sec.ObjectMeta.Labels
s.ObjectMeta.Annotations = sec.ObjectMeta.Annotations
s.ObjectMeta.OwnerReferences = sec.ObjectMeta.OwnerReferences
}); err != nil {
return fmt.Errorf("error provisioning state Secrets: %w", err)
}
}
sa := pgServiceAccount(pg, r.tsNamespace)
if _, err := createOrUpdate(ctx, r.Client, r.tsNamespace, sa, func(s *corev1.ServiceAccount) {
s.ObjectMeta.Labels = sa.ObjectMeta.Labels
s.ObjectMeta.Annotations = sa.ObjectMeta.Annotations
s.ObjectMeta.OwnerReferences = sa.ObjectMeta.OwnerReferences
}); err != nil {
return fmt.Errorf("error provisioning ServiceAccount: %w", err)
}
role := pgRole(pg, r.tsNamespace)
if _, err := createOrUpdate(ctx, r.Client, r.tsNamespace, role, func(r *rbacv1.Role) {
r.ObjectMeta.Labels = role.ObjectMeta.Labels
r.ObjectMeta.Annotations = role.ObjectMeta.Annotations
r.ObjectMeta.OwnerReferences = role.ObjectMeta.OwnerReferences
r.Rules = role.Rules
}); err != nil {
return fmt.Errorf("error provisioning Role: %w", err)
}
roleBinding := pgRoleBinding(pg, r.tsNamespace)
if _, err := createOrUpdate(ctx, r.Client, r.tsNamespace, roleBinding, func(r *rbacv1.RoleBinding) {
r.ObjectMeta.Labels = roleBinding.ObjectMeta.Labels
r.ObjectMeta.Annotations = roleBinding.ObjectMeta.Annotations
r.ObjectMeta.OwnerReferences = roleBinding.ObjectMeta.OwnerReferences
r.RoleRef = roleBinding.RoleRef
r.Subjects = roleBinding.Subjects
}); err != nil {
return fmt.Errorf("error provisioning RoleBinding: %w", err)
}
if pg.Spec.Type == tsapi.ProxyGroupTypeEgress {
cm := pgEgressCM(pg, r.tsNamespace)
if _, err := createOrUpdate(ctx, r.Client, r.tsNamespace, cm, func(existing *corev1.ConfigMap) {
existing.ObjectMeta.Labels = cm.ObjectMeta.Labels
existing.ObjectMeta.OwnerReferences = cm.ObjectMeta.OwnerReferences
}); err != nil {
return fmt.Errorf("error provisioning ConfigMap: %w", err)
}
}
ss, err := pgStatefulSet(pg, r.tsNamespace, r.proxyImage, r.tsFirewallMode, cfgHash)
if err != nil {
return fmt.Errorf("error generating StatefulSet spec: %w", err)
}
ss = applyProxyClassToStatefulSet(proxyClass, ss, nil, logger)
if _, err := createOrUpdate(ctx, r.Client, r.tsNamespace, ss, func(s *appsv1.StatefulSet) {
s.ObjectMeta.Labels = ss.ObjectMeta.Labels
s.ObjectMeta.Annotations = ss.ObjectMeta.Annotations
s.ObjectMeta.OwnerReferences = ss.ObjectMeta.OwnerReferences
s.Spec = ss.Spec
}); err != nil {
return fmt.Errorf("error provisioning StatefulSet: %w", err)
}
mo := &metricsOpts{
tsNamespace: r.tsNamespace,
proxyStsName: pg.Name,
proxyLabels: pgLabels(pg.Name, nil),
proxyType: "proxygroup",
}
if err := reconcileMetricsResources(ctx, logger, mo, proxyClass, r.Client); err != nil {
return fmt.Errorf("error reconciling metrics resources: %w", err)
}
if err := r.cleanupDanglingResources(ctx, pg); err != nil {
return fmt.Errorf("error cleaning up dangling resources: %w", err)
}
devices, err := r.getDeviceInfo(ctx, pg)
if err != nil {
return fmt.Errorf("failed to get device info: %w", err)
}
pg.Status.Devices = devices
return nil
}
// cleanupDanglingResources ensures we don't leak config secrets, state secrets, and
// tailnet devices when the number of replicas specified is reduced.
func (r *ProxyGroupReconciler) cleanupDanglingResources(ctx context.Context, pg *tsapi.ProxyGroup) error {
logger := r.logger(pg.Name)
metadata, err := r.getNodeMetadata(ctx, pg)
if err != nil {
return err
}
for _, m := range metadata {
if m.ordinal+1 <= int(pgReplicas(pg)) {
continue
}
// Dangling resource, delete the config + state Secrets, as well as
// deleting the device from the tailnet.
if err := r.deleteTailnetDevice(ctx, m.tsID, logger); err != nil {
return err
}
if err := r.Delete(ctx, m.stateSecret); err != nil {
if !apierrors.IsNotFound(err) {
return fmt.Errorf("error deleting state Secret %s: %w", m.stateSecret.Name, err)
}
}
configSecret := m.stateSecret.DeepCopy()
configSecret.Name += "-config"
if err := r.Delete(ctx, configSecret); err != nil {
if !apierrors.IsNotFound(err) {
return fmt.Errorf("error deleting config Secret %s: %w", configSecret.Name, err)
}
}
}
return nil
}
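// Illustrative sketch (not part of the upstream file): the cutoff above keeps any
// replica whose ordinal is still within the desired count and tears down the rest.
// For a ProxyGroup named "test" scaled from 4 replicas down to 2, the state Secrets
// test-0 and test-1 survive, while test-2 and test-3 (plus their "-config" Secrets
// and tailnet devices) are deleted:
//
//	for _, m := range metadata {
//		if m.ordinal < int(pgReplicas(pg)) { // same test as m.ordinal+1 <= replicas
//			continue // still wanted
//		}
//		// delete the tailnet device, the state Secret and the config Secret
//	}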
// maybeCleanup just deletes the device from the tailnet. All the kubernetes
// resources linked to a ProxyGroup will get cleaned up via owner references
// (which we can use because they are all in the same namespace).
func (r *ProxyGroupReconciler) maybeCleanup(ctx context.Context, pg *tsapi.ProxyGroup) (bool, error) {
logger := r.logger(pg.Name)
metadata, err := r.getNodeMetadata(ctx, pg)
if err != nil {
return false, err
}
for _, m := range metadata {
if err := r.deleteTailnetDevice(ctx, m.tsID, logger); err != nil {
return false, err
}
}
mo := &metricsOpts{
proxyLabels: pgLabels(pg.Name, nil),
tsNamespace: r.tsNamespace,
proxyType: "proxygroup"}
if err := maybeCleanupMetricsResources(ctx, mo, r.Client); err != nil {
return false, fmt.Errorf("error cleaning up metrics resources: %w", err)
}
logger.Infof("cleaned up ProxyGroup resources")
r.mu.Lock()
r.ensureRemovedFromGaugeForProxyGroup(pg)
r.mu.Unlock()
return true, nil
}
func (r *ProxyGroupReconciler) deleteTailnetDevice(ctx context.Context, id tailcfg.StableNodeID, logger *zap.SugaredLogger) error {
logger.Debugf("deleting device %s from control", string(id))
if err := r.tsClient.DeleteDevice(ctx, string(id)); err != nil {
errResp := &tailscale.ErrResponse{}
if ok := errors.As(err, errResp); ok && errResp.Status == http.StatusNotFound {
logger.Debugf("device %s not found, likely because it has already been deleted from control", string(id))
} else {
return fmt.Errorf("error deleting device: %w", err)
}
} else {
logger.Debugf("device %s deleted from control", string(id))
}
return nil
}
func (r *ProxyGroupReconciler) ensureConfigSecretsCreated(ctx context.Context, pg *tsapi.ProxyGroup, proxyClass *tsapi.ProxyClass) (hash string, err error) {
logger := r.logger(pg.Name)
var configSHA256Sum string
for i := range pgReplicas(pg) {
cfgSecret := &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: fmt.Sprintf("%s-%d-config", pg.Name, i),
Namespace: r.tsNamespace,
Labels: pgSecretLabels(pg.Name, "config"),
OwnerReferences: pgOwnerReference(pg),
},
}
var existingCfgSecret *corev1.Secret // unmodified copy of secret
if err := r.Get(ctx, client.ObjectKeyFromObject(cfgSecret), cfgSecret); err == nil {
logger.Debugf("secret %s/%s already exists", cfgSecret.GetNamespace(), cfgSecret.GetName())
existingCfgSecret = cfgSecret.DeepCopy()
} else if !apierrors.IsNotFound(err) {
return "", err
}
var authKey string
if existingCfgSecret == nil {
logger.Debugf("creating authkey for new ProxyGroup proxy")
tags := pg.Spec.Tags.Stringify()
if len(tags) == 0 {
tags = r.defaultTags
}
authKey, err = newAuthKey(ctx, r.tsClient, tags)
if err != nil {
return "", err
}
}
configs, err := pgTailscaledConfig(pg, proxyClass, i, authKey, existingCfgSecret)
if err != nil {
return "", fmt.Errorf("error creating tailscaled config: %w", err)
}
for cap, cfg := range configs {
cfgJSON, err := json.Marshal(cfg)
if err != nil {
return "", fmt.Errorf("error marshalling tailscaled config: %w", err)
}
mak.Set(&cfgSecret.StringData, tsoperator.TailscaledConfigFileName(cap), string(cfgJSON))
}
// The config sha256 sum is a value for a hash annotation used to trigger
// pod restarts when tailscaled config changes. Any config changes apply
// to all replicas, so it is sufficient to only hash the config for the
// first replica.
//
// In future, we're aiming to eliminate restarts altogether and have
// pods dynamically reload their config when it changes.
if i == 0 {
sum := sha256.New()
for _, cfg := range configs {
// Zero out the auth key so it doesn't affect the sha256 hash when we
// remove it from the config after the pods have all authed. Otherwise
// all the pods will need to restart immediately after authing.
cfg.AuthKey = nil
b, err := json.Marshal(cfg)
if err != nil {
return "", err
}
if _, err := sum.Write(b); err != nil {
return "", err
}
}
configSHA256Sum = fmt.Sprintf("%x", sum.Sum(nil))
}
if existingCfgSecret != nil {
logger.Debugf("patching the existing ProxyGroup config Secret %s", cfgSecret.Name)
if err := r.Patch(ctx, cfgSecret, client.MergeFrom(existingCfgSecret)); err != nil {
return "", err
}
} else {
logger.Debugf("creating a new config Secret %s for the ProxyGroup", cfgSecret.Name)
if err := r.Create(ctx, cfgSecret); err != nil {
return "", err
}
}
}
return configSHA256Sum, nil
}
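// Sketch of how the returned hash is consumed (the wiring lives in pgStatefulSet
// below; this is not part of the upstream file): the hex-encoded SHA-256 becomes a
// pod template annotation, so any change to the tailscaled config produces a new pod
// template and the StatefulSet controller restarts the proxies:
//
//	tmpl.Annotations = map[string]string{
//		podAnnotationLastSetConfigFileHash: cfgHash, // value returned by ensureConfigSecretsCreated
//	}
//
// Zeroing cfg.AuthKey before hashing (above) keeps the hash stable when the auth key
// is later stripped from the config, avoiding a second restart right after the pods
// have authed.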
// ensureAddedToGaugeForProxyGroup ensures the gauge metric for the ProxyGroup resource is updated when the ProxyGroup
// is created. r.mu must be held.
func (r *ProxyGroupReconciler) ensureAddedToGaugeForProxyGroup(pg *tsapi.ProxyGroup) {
switch pg.Spec.Type {
case tsapi.ProxyGroupTypeEgress:
r.egressProxyGroups.Add(pg.UID)
case tsapi.ProxyGroupTypeIngress:
r.ingressProxyGroups.Add(pg.UID)
}
gaugeEgressProxyGroupResources.Set(int64(r.egressProxyGroups.Len()))
gaugeIngressProxyGroupResources.Set(int64(r.ingressProxyGroups.Len()))
}
// ensureRemovedFromGaugeForProxyGroup ensures the gauge metric for the ProxyGroup resource type is updated when the
// ProxyGroup is deleted. r.mu must be held.
func (r *ProxyGroupReconciler) ensureRemovedFromGaugeForProxyGroup(pg *tsapi.ProxyGroup) {
switch pg.Spec.Type {
case tsapi.ProxyGroupTypeEgress:
r.egressProxyGroups.Remove(pg.UID)
case tsapi.ProxyGroupTypeIngress:
r.ingressProxyGroups.Remove(pg.UID)
}
gaugeEgressProxyGroupResources.Set(int64(r.egressProxyGroups.Len()))
gaugeIngressProxyGroupResources.Set(int64(r.ingressProxyGroups.Len()))
}
func pgTailscaledConfig(pg *tsapi.ProxyGroup, class *tsapi.ProxyClass, idx int32, authKey string, oldSecret *corev1.Secret) (tailscaledConfigs, error) {
conf := &ipn.ConfigVAlpha{
Version: "alpha0",
AcceptDNS: "false",
AcceptRoutes: "false", // AcceptRoutes defaults to true
Locked: "false",
Hostname: ptr.To(fmt.Sprintf("%s-%d", pg.Name, idx)),
}
if pg.Spec.HostnamePrefix != "" {
conf.Hostname = ptr.To(fmt.Sprintf("%s-%d", pg.Spec.HostnamePrefix, idx))
}
if shouldAcceptRoutes(class) {
conf.AcceptRoutes = "true"
}
deviceAuthed := false
for _, d := range pg.Status.Devices {
if d.Hostname == *conf.Hostname {
deviceAuthed = true
break
}
}
if authKey != "" {
conf.AuthKey = &authKey
} else if !deviceAuthed {
key, err := authKeyFromSecret(oldSecret)
if err != nil {
return nil, fmt.Errorf("error retrieving auth key from Secret: %w", err)
}
conf.AuthKey = key
}
capVerConfigs := make(map[tailcfg.CapabilityVersion]ipn.ConfigVAlpha)
capVerConfigs[106] = *conf
return capVerConfigs, nil
}
func (r *ProxyGroupReconciler) validate(_ *tsapi.ProxyGroup) error {
return nil
}
// getNodeMetadata gets metadata for all the pods owned by this ProxyGroup by
// querying their state Secrets. It may not return the same number of items as
// specified in the ProxyGroup spec if e.g. it is getting scaled up or down, or
// some pods have failed to write state.
func (r *ProxyGroupReconciler) getNodeMetadata(ctx context.Context, pg *tsapi.ProxyGroup) (metadata []nodeMetadata, _ error) {
// List all state secrets owned by this ProxyGroup.
secrets := &corev1.SecretList{}
if err := r.List(ctx, secrets, client.InNamespace(r.tsNamespace), client.MatchingLabels(pgSecretLabels(pg.Name, "state"))); err != nil {
return nil, fmt.Errorf("failed to list state Secrets: %w", err)
}
for _, secret := range secrets.Items {
var ordinal int
if _, err := fmt.Sscanf(secret.Name, pg.Name+"-%d", &ordinal); err != nil {
return nil, fmt.Errorf("unexpected secret %s was labelled as owned by the ProxyGroup %s: %w", secret.Name, pg.Name, err)
}
id, dnsName, ok, err := getNodeMetadata(ctx, &secret)
if err != nil {
return nil, err
}
if !ok {
continue
}
metadata = append(metadata, nodeMetadata{
ordinal: ordinal,
stateSecret: &secret,
tsID: id,
dnsName: dnsName,
})
}
return metadata, nil
}
func (r *ProxyGroupReconciler) getDeviceInfo(ctx context.Context, pg *tsapi.ProxyGroup) (devices []tsapi.TailnetDevice, _ error) {
metadata, err := r.getNodeMetadata(ctx, pg)
if err != nil {
return nil, err
}
for _, m := range metadata {
device, ok, err := getDeviceInfo(ctx, r.tsClient, m.stateSecret)
if err != nil {
return nil, err
}
if !ok {
continue
}
devices = append(devices, tsapi.TailnetDevice{
Hostname: device.Hostname,
TailnetIPs: device.TailnetIPs,
})
}
return devices, nil
}
type nodeMetadata struct {
ordinal int
stateSecret *corev1.Secret
tsID tailcfg.StableNodeID
dnsName string
}


@@ -0,0 +1,304 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build !plan9
package main
import (
"fmt"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
rbacv1 "k8s.io/api/rbac/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"sigs.k8s.io/yaml"
tsapi "tailscale.com/k8s-operator/apis/v1alpha1"
"tailscale.com/kube/egressservices"
"tailscale.com/kube/kubetypes"
"tailscale.com/types/ptr"
)
// Returns the base StatefulSet definition for a ProxyGroup. A ProxyClass may be
// applied over the top after.
func pgStatefulSet(pg *tsapi.ProxyGroup, namespace, image, tsFirewallMode, cfgHash string) (*appsv1.StatefulSet, error) {
ss := new(appsv1.StatefulSet)
if err := yaml.Unmarshal(proxyYaml, &ss); err != nil {
return nil, fmt.Errorf("failed to unmarshal proxy spec: %w", err)
}
// Validate some base assumptions.
if len(ss.Spec.Template.Spec.InitContainers) != 1 {
return nil, fmt.Errorf("[unexpected] base proxy config had %d init containers instead of 1", len(ss.Spec.Template.Spec.InitContainers))
}
if len(ss.Spec.Template.Spec.Containers) != 1 {
return nil, fmt.Errorf("[unexpected] base proxy config had %d containers instead of 1", len(ss.Spec.Template.Spec.Containers))
}
// StatefulSet config.
ss.ObjectMeta = metav1.ObjectMeta{
Name: pg.Name,
Namespace: namespace,
Labels: pgLabels(pg.Name, nil),
OwnerReferences: pgOwnerReference(pg),
}
ss.Spec.Replicas = ptr.To(pgReplicas(pg))
ss.Spec.Selector = &metav1.LabelSelector{
MatchLabels: pgLabels(pg.Name, nil),
}
// Template config.
tmpl := &ss.Spec.Template
tmpl.ObjectMeta = metav1.ObjectMeta{
Name: pg.Name,
Namespace: namespace,
Labels: pgLabels(pg.Name, nil),
DeletionGracePeriodSeconds: ptr.To[int64](10),
Annotations: map[string]string{
podAnnotationLastSetConfigFileHash: cfgHash,
},
}
tmpl.Spec.ServiceAccountName = pg.Name
tmpl.Spec.InitContainers[0].Image = image
tmpl.Spec.Volumes = func() []corev1.Volume {
var volumes []corev1.Volume
for i := range pgReplicas(pg) {
volumes = append(volumes, corev1.Volume{
Name: fmt.Sprintf("tailscaledconfig-%d", i),
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: fmt.Sprintf("%s-%d-config", pg.Name, i),
},
},
})
}
if pg.Spec.Type == tsapi.ProxyGroupTypeEgress {
volumes = append(volumes, corev1.Volume{
Name: pgEgressCMName(pg.Name),
VolumeSource: corev1.VolumeSource{
ConfigMap: &corev1.ConfigMapVolumeSource{
LocalObjectReference: corev1.LocalObjectReference{
Name: pgEgressCMName(pg.Name),
},
},
},
})
}
return volumes
}()
// Main container config.
c := &ss.Spec.Template.Spec.Containers[0]
c.Image = image
c.VolumeMounts = func() []corev1.VolumeMount {
var mounts []corev1.VolumeMount
// TODO(tomhjp): Read config directly from the secret instead. The
// mounts change on scaling up/down which causes unnecessary restarts
// for pods that haven't meaningfully changed.
for i := range pgReplicas(pg) {
mounts = append(mounts, corev1.VolumeMount{
Name: fmt.Sprintf("tailscaledconfig-%d", i),
ReadOnly: true,
MountPath: fmt.Sprintf("/etc/tsconfig/%s-%d", pg.Name, i),
})
}
if pg.Spec.Type == tsapi.ProxyGroupTypeEgress {
mounts = append(mounts, corev1.VolumeMount{
Name: pgEgressCMName(pg.Name),
MountPath: "/etc/proxies",
ReadOnly: true,
})
}
return mounts
}()
c.Env = func() []corev1.EnvVar {
envs := []corev1.EnvVar{
{
// TODO(irbekrm): verify that .status.podIPs are always set, else read in .status.podIP as well.
Name: "POD_IPS", // this will be a comma separate list i.e 10.136.0.6,2600:1900:4011:161:0:e:0:6
ValueFrom: &corev1.EnvVarSource{
FieldRef: &corev1.ObjectFieldSelector{
FieldPath: "status.podIPs",
},
},
},
{
Name: "TS_KUBE_SECRET",
Value: "$(POD_NAME)",
},
{
Name: "TS_STATE",
Value: "kube:$(POD_NAME)",
},
{
Name: "TS_EXPERIMENTAL_VERSIONED_CONFIG_DIR",
Value: "/etc/tsconfig/$(POD_NAME)",
},
}
if tsFirewallMode != "" {
envs = append(envs, corev1.EnvVar{
Name: "TS_DEBUG_FIREWALL_MODE",
Value: tsFirewallMode,
})
}
if pg.Spec.Type == tsapi.ProxyGroupTypeEgress {
envs = append(envs, corev1.EnvVar{
Name: "TS_EGRESS_SERVICES_CONFIG_PATH",
Value: fmt.Sprintf("/etc/proxies/%s", egressservices.KeyEgressServices),
},
corev1.EnvVar{
Name: "TS_INTERNAL_APP",
Value: kubetypes.AppProxyGroupEgress,
},
)
} else {
envs = append(envs, corev1.EnvVar{
Name: "TS_INTERNAL_APP",
Value: kubetypes.AppProxyGroupIngress,
})
}
return append(c.Env, envs...)
}()
return ss, nil
}
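// Note on the "$(POD_NAME)" references above (a sketch; the base template comes from
// the embedded proxyYaml, which is not shown here): Kubernetes only expands $(VAR)
// inside env values when VAR is an env var declared earlier in the same container,
// so this assumes the base manifest defines POD_NAME via the downward API, roughly:
//
//	corev1.EnvVar{
//		Name: "POD_NAME",
//		ValueFrom: &corev1.EnvVarSource{
//			FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
//		},
//	}
//
// With that in place, pod test-0 stores its state in Secret test-0 and reads its
// config from /etc/tsconfig/test-0, matching the per-replica volumes mounted above.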
func pgServiceAccount(pg *tsapi.ProxyGroup, namespace string) *corev1.ServiceAccount {
return &corev1.ServiceAccount{
ObjectMeta: metav1.ObjectMeta{
Name: pg.Name,
Namespace: namespace,
Labels: pgLabels(pg.Name, nil),
OwnerReferences: pgOwnerReference(pg),
},
}
}
func pgRole(pg *tsapi.ProxyGroup, namespace string) *rbacv1.Role {
return &rbacv1.Role{
ObjectMeta: metav1.ObjectMeta{
Name: pg.Name,
Namespace: namespace,
Labels: pgLabels(pg.Name, nil),
OwnerReferences: pgOwnerReference(pg),
},
Rules: []rbacv1.PolicyRule{
{
APIGroups: []string{""},
Resources: []string{"secrets"},
Verbs: []string{
"get",
"patch",
"update",
},
ResourceNames: func() (secrets []string) {
for i := range pgReplicas(pg) {
secrets = append(secrets,
fmt.Sprintf("%s-%d-config", pg.Name, i), // Config with auth key.
fmt.Sprintf("%s-%d", pg.Name, i), // State.
)
}
return secrets
}(),
},
{
APIGroups: []string{""},
Resources: []string{"events"},
Verbs: []string{
"create",
"patch",
"get",
},
},
},
}
}
func pgRoleBinding(pg *tsapi.ProxyGroup, namespace string) *rbacv1.RoleBinding {
return &rbacv1.RoleBinding{
ObjectMeta: metav1.ObjectMeta{
Name: pg.Name,
Namespace: namespace,
Labels: pgLabels(pg.Name, nil),
OwnerReferences: pgOwnerReference(pg),
},
Subjects: []rbacv1.Subject{
{
Kind: "ServiceAccount",
Name: pg.Name,
Namespace: namespace,
},
},
RoleRef: rbacv1.RoleRef{
Kind: "Role",
Name: pg.Name,
},
}
}
func pgStateSecrets(pg *tsapi.ProxyGroup, namespace string) (secrets []*corev1.Secret) {
for i := range pgReplicas(pg) {
secrets = append(secrets, &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: fmt.Sprintf("%s-%d", pg.Name, i),
Namespace: namespace,
Labels: pgSecretLabels(pg.Name, "state"),
OwnerReferences: pgOwnerReference(pg),
},
})
}
return secrets
}
func pgEgressCM(pg *tsapi.ProxyGroup, namespace string) *corev1.ConfigMap {
return &corev1.ConfigMap{
ObjectMeta: metav1.ObjectMeta{
Name: pgEgressCMName(pg.Name),
Namespace: namespace,
Labels: pgLabels(pg.Name, nil),
OwnerReferences: pgOwnerReference(pg),
},
}
}
func pgSecretLabels(pgName, typ string) map[string]string {
return pgLabels(pgName, map[string]string{
labelSecretType: typ, // "config" or "state".
})
}
func pgLabels(pgName string, customLabels map[string]string) map[string]string {
l := make(map[string]string, len(customLabels)+3)
for k, v := range customLabels {
l[k] = v
}
l[LabelManaged] = "true"
l[LabelParentType] = "proxygroup"
l[LabelParentName] = pgName
return l
}
func pgOwnerReference(owner *tsapi.ProxyGroup) []metav1.OwnerReference {
return []metav1.OwnerReference{*metav1.NewControllerRef(owner, tsapi.SchemeGroupVersion.WithKind("ProxyGroup"))}
}
func pgReplicas(pg *tsapi.ProxyGroup) int32 {
if pg.Spec.Replicas != nil {
return *pg.Spec.Replicas
}
return 2
}
func pgEgressCMName(pg string) string {
return fmt.Sprintf("%s-egress-config", pg)
}
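// Worked example (sketch, not part of the upstream file): for a ProxyGroup named
// "test" with Spec.Replicas left unset, the helpers above produce:
//
//	pgReplicas(pg)          == 2
//	pgEgressCMName("test")  == "test-egress-config"
//	state Secrets           -> "test-0", "test-1"
//	config Secrets          -> "test-0-config", "test-1-config"
//	pgLabels("test", nil)   -> {LabelManaged: "true", LabelParentType: "proxygroup", LabelParentName: "test"}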


@@ -0,0 +1,448 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build !plan9
package main
import (
"context"
"encoding/json"
"fmt"
"testing"
"time"
"github.com/google/go-cmp/cmp"
"go.uber.org/zap"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
rbacv1 "k8s.io/api/rbac/v1"
apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/tools/record"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/client/fake"
"tailscale.com/client/tailscale"
tsoperator "tailscale.com/k8s-operator"
tsapi "tailscale.com/k8s-operator/apis/v1alpha1"
"tailscale.com/kube/egressservices"
"tailscale.com/kube/kubetypes"
"tailscale.com/tstest"
"tailscale.com/types/ptr"
)
const testProxyImage = "tailscale/tailscale:test"
var defaultProxyClassAnnotations = map[string]string{
"some-annotation": "from-the-proxy-class",
}
func TestProxyGroup(t *testing.T) {
const initialCfgHash = "6632726be70cf224049580deb4d317bba065915b5fd415461d60ed621c91b196"
pc := &tsapi.ProxyClass{
ObjectMeta: metav1.ObjectMeta{
Name: "default-pc",
},
Spec: tsapi.ProxyClassSpec{
StatefulSet: &tsapi.StatefulSet{
Annotations: defaultProxyClassAnnotations,
},
},
}
pg := &tsapi.ProxyGroup{
ObjectMeta: metav1.ObjectMeta{
Name: "test",
Finalizers: []string{"tailscale.com/finalizer"},
},
Spec: tsapi.ProxyGroupSpec{
Type: tsapi.ProxyGroupTypeEgress,
},
}
fc := fake.NewClientBuilder().
WithScheme(tsapi.GlobalScheme).
WithObjects(pg, pc).
WithStatusSubresource(pg, pc).
Build()
tsClient := &fakeTSClient{}
zl, _ := zap.NewDevelopment()
fr := record.NewFakeRecorder(1)
cl := tstest.NewClock(tstest.ClockOpts{})
reconciler := &ProxyGroupReconciler{
tsNamespace: tsNamespace,
proxyImage: testProxyImage,
defaultTags: []string{"tag:test-tag"},
tsFirewallMode: "auto",
defaultProxyClass: "default-pc",
Client: fc,
tsClient: tsClient,
recorder: fr,
l: zl.Sugar(),
clock: cl,
}
crd := &apiextensionsv1.CustomResourceDefinition{ObjectMeta: metav1.ObjectMeta{Name: serviceMonitorCRD}}
opts := configOpts{
proxyType: "proxygroup",
stsName: pg.Name,
parentType: "proxygroup",
tailscaleNamespace: "tailscale",
resourceVersion: "1",
}
t.Run("proxyclass_not_ready", func(t *testing.T) {
expectReconciled(t, reconciler, "", pg.Name)
tsoperator.SetProxyGroupCondition(pg, tsapi.ProxyGroupReady, metav1.ConditionFalse, reasonProxyGroupCreating, "the ProxyGroup's ProxyClass default-pc is not yet in a ready state, waiting...", 0, cl, zl.Sugar())
expectEqual(t, fc, pg, nil)
expectProxyGroupResources(t, fc, pg, false, "")
})
t.Run("observe_ProxyGroupCreating_status_reason", func(t *testing.T) {
pc.Status = tsapi.ProxyClassStatus{
Conditions: []metav1.Condition{{
Type: string(tsapi.ProxyClassReady),
Status: metav1.ConditionTrue,
Reason: reasonProxyClassValid,
Message: reasonProxyClassValid,
LastTransitionTime: metav1.Time{Time: cl.Now().Truncate(time.Second)},
}},
}
if err := fc.Status().Update(context.Background(), pc); err != nil {
t.Fatal(err)
}
expectReconciled(t, reconciler, "", pg.Name)
tsoperator.SetProxyGroupCondition(pg, tsapi.ProxyGroupReady, metav1.ConditionFalse, reasonProxyGroupCreating, "0/2 ProxyGroup pods running", 0, cl, zl.Sugar())
expectEqual(t, fc, pg, nil)
expectProxyGroupResources(t, fc, pg, true, initialCfgHash)
if expected := 1; reconciler.egressProxyGroups.Len() != expected {
t.Fatalf("expected %d egress ProxyGroups, got %d", expected, reconciler.egressProxyGroups.Len())
}
expectProxyGroupResources(t, fc, pg, true, initialCfgHash)
keyReq := tailscale.KeyCapabilities{
Devices: tailscale.KeyDeviceCapabilities{
Create: tailscale.KeyDeviceCreateCapabilities{
Reusable: false,
Ephemeral: false,
Preauthorized: true,
Tags: []string{"tag:test-tag"},
},
},
}
if diff := cmp.Diff(tsClient.KeyRequests(), []tailscale.KeyCapabilities{keyReq, keyReq}); diff != "" {
t.Fatalf("unexpected secrets (-got +want):\n%s", diff)
}
})
t.Run("simulate_successful_device_auth", func(t *testing.T) {
addNodeIDToStateSecrets(t, fc, pg)
expectReconciled(t, reconciler, "", pg.Name)
pg.Status.Devices = []tsapi.TailnetDevice{
{
Hostname: "hostname-nodeid-0",
TailnetIPs: []string{"1.2.3.4", "::1"},
},
{
Hostname: "hostname-nodeid-1",
TailnetIPs: []string{"1.2.3.4", "::1"},
},
}
tsoperator.SetProxyGroupCondition(pg, tsapi.ProxyGroupReady, metav1.ConditionTrue, reasonProxyGroupReady, reasonProxyGroupReady, 0, cl, zl.Sugar())
expectEqual(t, fc, pg, nil)
expectProxyGroupResources(t, fc, pg, true, initialCfgHash)
})
t.Run("scale_up_to_3", func(t *testing.T) {
pg.Spec.Replicas = ptr.To[int32](3)
mustUpdate(t, fc, "", pg.Name, func(p *tsapi.ProxyGroup) {
p.Spec = pg.Spec
})
expectReconciled(t, reconciler, "", pg.Name)
tsoperator.SetProxyGroupCondition(pg, tsapi.ProxyGroupReady, metav1.ConditionFalse, reasonProxyGroupCreating, "2/3 ProxyGroup pods running", 0, cl, zl.Sugar())
expectEqual(t, fc, pg, nil)
expectProxyGroupResources(t, fc, pg, true, initialCfgHash)
addNodeIDToStateSecrets(t, fc, pg)
expectReconciled(t, reconciler, "", pg.Name)
tsoperator.SetProxyGroupCondition(pg, tsapi.ProxyGroupReady, metav1.ConditionTrue, reasonProxyGroupReady, reasonProxyGroupReady, 0, cl, zl.Sugar())
pg.Status.Devices = append(pg.Status.Devices, tsapi.TailnetDevice{
Hostname: "hostname-nodeid-2",
TailnetIPs: []string{"1.2.3.4", "::1"},
})
expectEqual(t, fc, pg, nil)
expectProxyGroupResources(t, fc, pg, true, initialCfgHash)
})
t.Run("scale_down_to_1", func(t *testing.T) {
pg.Spec.Replicas = ptr.To[int32](1)
mustUpdate(t, fc, "", pg.Name, func(p *tsapi.ProxyGroup) {
p.Spec = pg.Spec
})
expectReconciled(t, reconciler, "", pg.Name)
pg.Status.Devices = pg.Status.Devices[:1] // truncate to only the first device.
expectEqual(t, fc, pg, nil)
expectProxyGroupResources(t, fc, pg, true, initialCfgHash)
})
t.Run("trigger_config_change_and_observe_new_config_hash", func(t *testing.T) {
pc.Spec.TailscaleConfig = &tsapi.TailscaleConfig{
AcceptRoutes: true,
}
mustUpdate(t, fc, "", pc.Name, func(p *tsapi.ProxyClass) {
p.Spec = pc.Spec
})
expectReconciled(t, reconciler, "", pg.Name)
expectEqual(t, fc, pg, nil)
expectProxyGroupResources(t, fc, pg, true, "518a86e9fae64f270f8e0ec2a2ea6ca06c10f725035d3d6caca132cd61e42a74")
})
t.Run("enable_metrics", func(t *testing.T) {
pc.Spec.Metrics = &tsapi.Metrics{Enable: true}
mustUpdate(t, fc, "", pc.Name, func(p *tsapi.ProxyClass) {
p.Spec = pc.Spec
})
expectReconciled(t, reconciler, "", pg.Name)
expectEqual(t, fc, expectedMetricsService(opts), nil)
})
t.Run("enable_service_monitor_no_crd", func(t *testing.T) {
pc.Spec.Metrics.ServiceMonitor = &tsapi.ServiceMonitor{Enable: true}
mustUpdate(t, fc, "", pc.Name, func(p *tsapi.ProxyClass) {
p.Spec.Metrics = pc.Spec.Metrics
})
expectReconciled(t, reconciler, "", pg.Name)
})
t.Run("create_crd_expect_service_monitor", func(t *testing.T) {
mustCreate(t, fc, crd)
expectReconciled(t, reconciler, "", pg.Name)
expectEqualUnstructured(t, fc, expectedServiceMonitor(t, opts))
})
t.Run("delete_and_cleanup", func(t *testing.T) {
if err := fc.Delete(context.Background(), pg); err != nil {
t.Fatal(err)
}
expectReconciled(t, reconciler, "", pg.Name)
expectMissing[tsapi.ProxyGroup](t, fc, "", pg.Name)
if expected := 0; reconciler.egressProxyGroups.Len() != expected {
t.Fatalf("expected %d ProxyGroups, got %d", expected, reconciler.egressProxyGroups.Len())
}
// 2 nodes should get deleted as part of the scale down, and then finally
// the first node gets deleted with the ProxyGroup cleanup.
if diff := cmp.Diff(tsClient.deleted, []string{"nodeid-1", "nodeid-2", "nodeid-0"}); diff != "" {
t.Fatalf("unexpected deleted devices (-got +want):\n%s", diff)
}
expectMissing[corev1.Service](t, reconciler, "tailscale", metricsResourceName(pg.Name))
// The fake client does not clean up objects whose owner has been
// deleted, so we can't test for the owned resources getting deleted.
})
}
func TestProxyGroupTypes(t *testing.T) {
fc := fake.NewClientBuilder().
WithScheme(tsapi.GlobalScheme).
Build()
zl, _ := zap.NewDevelopment()
reconciler := &ProxyGroupReconciler{
tsNamespace: tsNamespace,
proxyImage: testProxyImage,
Client: fc,
l: zl.Sugar(),
tsClient: &fakeTSClient{},
clock: tstest.NewClock(tstest.ClockOpts{}),
}
t.Run("egress_type", func(t *testing.T) {
pg := &tsapi.ProxyGroup{
ObjectMeta: metav1.ObjectMeta{
Name: "test-egress",
UID: "test-egress-uid",
},
Spec: tsapi.ProxyGroupSpec{
Type: tsapi.ProxyGroupTypeEgress,
Replicas: ptr.To[int32](0),
},
}
if err := fc.Create(context.Background(), pg); err != nil {
t.Fatal(err)
}
expectReconciled(t, reconciler, "", pg.Name)
verifyProxyGroupCounts(t, reconciler, 0, 1)
sts := &appsv1.StatefulSet{}
if err := fc.Get(context.Background(), client.ObjectKey{Namespace: tsNamespace, Name: pg.Name}, sts); err != nil {
t.Fatalf("failed to get StatefulSet: %v", err)
}
verifyEnvVar(t, sts, "TS_INTERNAL_APP", kubetypes.AppProxyGroupEgress)
verifyEnvVar(t, sts, "TS_EGRESS_SERVICES_CONFIG_PATH", fmt.Sprintf("/etc/proxies/%s", egressservices.KeyEgressServices))
// Verify that egress configuration has been set up.
cm := &corev1.ConfigMap{}
cmName := fmt.Sprintf("%s-egress-config", pg.Name)
if err := fc.Get(context.Background(), client.ObjectKey{Namespace: tsNamespace, Name: cmName}, cm); err != nil {
t.Fatalf("failed to get ConfigMap: %v", err)
}
expectedVolumes := []corev1.Volume{
{
Name: cmName,
VolumeSource: corev1.VolumeSource{
ConfigMap: &corev1.ConfigMapVolumeSource{
LocalObjectReference: corev1.LocalObjectReference{
Name: cmName,
},
},
},
},
}
expectedVolumeMounts := []corev1.VolumeMount{
{
Name: cmName,
MountPath: "/etc/proxies",
ReadOnly: true,
},
}
if diff := cmp.Diff(expectedVolumes, sts.Spec.Template.Spec.Volumes); diff != "" {
t.Errorf("unexpected volumes (-want +got):\n%s", diff)
}
if diff := cmp.Diff(expectedVolumeMounts, sts.Spec.Template.Spec.Containers[0].VolumeMounts); diff != "" {
t.Errorf("unexpected volume mounts (-want +got):\n%s", diff)
}
})
t.Run("ingress_type", func(t *testing.T) {
pg := &tsapi.ProxyGroup{
ObjectMeta: metav1.ObjectMeta{
Name: "test-ingress",
UID: "test-ingress-uid",
},
Spec: tsapi.ProxyGroupSpec{
Type: tsapi.ProxyGroupTypeIngress,
},
}
if err := fc.Create(context.Background(), pg); err != nil {
t.Fatal(err)
}
expectReconciled(t, reconciler, "", pg.Name)
verifyProxyGroupCounts(t, reconciler, 1, 1)
sts := &appsv1.StatefulSet{}
if err := fc.Get(context.Background(), client.ObjectKey{Namespace: tsNamespace, Name: pg.Name}, sts); err != nil {
t.Fatalf("failed to get StatefulSet: %v", err)
}
verifyEnvVar(t, sts, "TS_INTERNAL_APP", kubetypes.AppProxyGroupIngress)
})
}
func verifyProxyGroupCounts(t *testing.T, r *ProxyGroupReconciler, wantIngress, wantEgress int) {
t.Helper()
if r.ingressProxyGroups.Len() != wantIngress {
t.Errorf("expected %d ingress proxy groups, got %d", wantIngress, r.ingressProxyGroups.Len())
}
if r.egressProxyGroups.Len() != wantEgress {
t.Errorf("expected %d egress proxy groups, got %d", wantEgress, r.egressProxyGroups.Len())
}
}
func verifyEnvVar(t *testing.T, sts *appsv1.StatefulSet, name, expectedValue string) {
t.Helper()
for _, env := range sts.Spec.Template.Spec.Containers[0].Env {
if env.Name == name {
if env.Value != expectedValue {
t.Errorf("expected %s=%s, got %s", name, expectedValue, env.Value)
}
return
}
}
t.Errorf("%s environment variable not found", name)
}
func expectProxyGroupResources(t *testing.T, fc client.WithWatch, pg *tsapi.ProxyGroup, shouldExist bool, cfgHash string) {
t.Helper()
role := pgRole(pg, tsNamespace)
roleBinding := pgRoleBinding(pg, tsNamespace)
serviceAccount := pgServiceAccount(pg, tsNamespace)
statefulSet, err := pgStatefulSet(pg, tsNamespace, testProxyImage, "auto", cfgHash)
if err != nil {
t.Fatal(err)
}
statefulSet.Annotations = defaultProxyClassAnnotations
if shouldExist {
expectEqual(t, fc, role, nil)
expectEqual(t, fc, roleBinding, nil)
expectEqual(t, fc, serviceAccount, nil)
expectEqual(t, fc, statefulSet, nil)
} else {
expectMissing[rbacv1.Role](t, fc, role.Namespace, role.Name)
expectMissing[rbacv1.RoleBinding](t, fc, roleBinding.Namespace, roleBinding.Name)
expectMissing[corev1.ServiceAccount](t, fc, serviceAccount.Namespace, serviceAccount.Name)
expectMissing[appsv1.StatefulSet](t, fc, statefulSet.Namespace, statefulSet.Name)
}
var expectedSecrets []string
if shouldExist {
for i := range pgReplicas(pg) {
expectedSecrets = append(expectedSecrets,
fmt.Sprintf("%s-%d", pg.Name, i),
fmt.Sprintf("%s-%d-config", pg.Name, i),
)
}
}
expectSecrets(t, fc, expectedSecrets)
}
func expectSecrets(t *testing.T, fc client.WithWatch, expected []string) {
t.Helper()
secrets := &corev1.SecretList{}
if err := fc.List(context.Background(), secrets); err != nil {
t.Fatal(err)
}
var actual []string
for _, secret := range secrets.Items {
actual = append(actual, secret.Name)
}
if diff := cmp.Diff(actual, expected); diff != "" {
t.Fatalf("unexpected secrets (-got +want):\n%s", diff)
}
}
func addNodeIDToStateSecrets(t *testing.T, fc client.WithWatch, pg *tsapi.ProxyGroup) {
const key = "profile-abc"
for i := range pgReplicas(pg) {
bytes, err := json.Marshal(map[string]any{
"Config": map[string]any{
"NodeID": fmt.Sprintf("nodeid-%d", i),
},
})
if err != nil {
t.Fatal(err)
}
mustUpdate(t, fc, tsNamespace, fmt.Sprintf("test-%d", i), func(s *corev1.Secret) {
s.Data = map[string][]byte{
currentProfileKey: []byte(key),
key: bytes,
}
})
}
}
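// Sketch of the state Secret contents this helper writes (not part of the upstream
// test): for replica i of the "test" ProxyGroup, the Secret test-<i> ends up with
//
//	Data: map[string][]byte{
//		currentProfileKey: []byte("profile-abc"),
//		"profile-abc":     []byte(`{"Config":{"NodeID":"nodeid-0"}}`), // NodeID index matches the replica
//	}
//
// which is what the reconciler's getNodeMetadata reads back when populating
// pg.Status.Devices in the assertions above.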


@@ -15,6 +15,7 @@ import (
"net/http"
"os"
"slices"
"strconv"
"strings"
"go.uber.org/zap"
@@ -47,11 +48,11 @@ const (
LabelParentType = "tailscale.com/parent-resource-type"
LabelParentName = "tailscale.com/parent-resource"
LabelParentNamespace = "tailscale.com/parent-resource-ns"
labelSecretType = "tailscale.com/secret-type" // "config" or "state".
// LabelProxyClass can be set by users on Connectors, tailscale
// Ingresses and Services that define cluster ingress or cluster egress,
// to specify that configuration in this ProxyClass should be applied to
// resources created for the Connector, Ingress or Service.
// LabelProxyClass can be set by users on tailscale Ingresses and Services that define cluster ingress or
// cluster egress, to specify that configuration in this ProxyClass should be applied to resources created for
// the Ingress or Service.
LabelProxyClass = "tailscale.com/proxy-class"
FinalizerName = "tailscale.com/finalizer"
@@ -65,6 +66,8 @@ const (
//MagicDNS name of tailnet node.
AnnotationTailnetTargetFQDN = "tailscale.com/tailnet-fqdn"
AnnotationProxyGroup = "tailscale.com/proxy-group"
// Annotations settable by users on ingresses.
AnnotationFunnel = "tailscale.com/funnel"
@@ -92,6 +95,12 @@ const (
podAnnotationLastSetTailnetTargetFQDN = "tailscale.com/operator-last-set-ts-tailnet-target-fqdn"
// podAnnotationLastSetConfigFileHash is sha256 hash of the current tailscaled configuration contents.
podAnnotationLastSetConfigFileHash = "tailscale.com/operator-last-set-config-file-hash"
proxyTypeEgress = "egress_service"
proxyTypeIngressService = "ingress_service"
proxyTypeIngressResource = "ingress_resource"
proxyTypeConnector = "connector"
proxyTypeProxyGroup = "proxygroup"
)
var (
@@ -120,6 +129,8 @@ type tailscaleSTSConfig struct {
Hostname string
Tags []string // if empty, use defaultTags
proxyType string
// Connector specifies a configuration of a Connector instance if that's
// what this StatefulSet should be created for.
Connector *connector
@@ -130,10 +141,13 @@ type tailscaleSTSConfig struct {
}
type connector struct {
// routes is a list of subnet routes that this Connector should expose.
// routes is a list of routes that this Connector should advertise either as a subnet router or as an app
// connector.
routes string
// isExitNode defines whether this Connector should act as an exit node.
isExitNode bool
// isAppConnector defines whether this Connector should act as an app connector.
isAppConnector bool
}
type tsnetServer interface {
CertDomains() []string
@@ -184,22 +198,30 @@ func (a *tailscaleSTSReconciler) Provision(ctx context.Context, logger *zap.Suga
}
sts.ProxyClass = proxyClass
secretName, tsConfigHash, configs, err := a.createOrGetSecret(ctx, logger, sts, hsvc)
secretName, tsConfigHash, _, err := a.createOrGetSecret(ctx, logger, sts, hsvc)
if err != nil {
return nil, fmt.Errorf("failed to create or get API key secret: %w", err)
}
_, err = a.reconcileSTS(ctx, logger, sts, hsvc, secretName, tsConfigHash, configs)
_, err = a.reconcileSTS(ctx, logger, sts, hsvc, secretName, tsConfigHash)
if err != nil {
return nil, fmt.Errorf("failed to reconcile statefulset: %w", err)
}
mo := &metricsOpts{
proxyStsName: hsvc.Name,
tsNamespace: hsvc.Namespace,
proxyLabels: hsvc.Labels,
proxyType: sts.proxyType,
}
if err = reconcileMetricsResources(ctx, logger, mo, sts.ProxyClass, a.Client); err != nil {
return nil, fmt.Errorf("failed to ensure metrics resources: %w", err)
}
return hsvc, nil
}
// Cleanup removes all resources associated that were created by Provision with
// the given labels. It returns true when all resources have been removed,
// otherwise it returns false and the caller should retry later.
func (a *tailscaleSTSReconciler) Cleanup(ctx context.Context, logger *zap.SugaredLogger, labels map[string]string) (done bool, _ error) {
func (a *tailscaleSTSReconciler) Cleanup(ctx context.Context, logger *zap.SugaredLogger, labels map[string]string, typ string) (done bool, _ error) {
// Need to delete the StatefulSet first, and delete it with foreground
// cascading deletion. That way, the pod that's writing to the Secret will
// stop running before we start looking at the Secret's contents, and
@@ -225,21 +247,21 @@ func (a *tailscaleSTSReconciler) Cleanup(ctx context.Context, logger *zap.Sugare
return false, nil
}
id, _, _, err := a.DeviceInfo(ctx, labels)
dev, err := a.DeviceInfo(ctx, labels, logger)
if err != nil {
return false, fmt.Errorf("getting device info: %w", err)
}
if id != "" {
logger.Debugf("deleting device %s from control", string(id))
if err := a.tsClient.DeleteDevice(ctx, string(id)); err != nil {
if dev != nil && dev.id != "" {
logger.Debugf("deleting device %s from control", string(dev.id))
if err := a.tsClient.DeleteDevice(ctx, string(dev.id)); err != nil {
errResp := &tailscale.ErrResponse{}
if ok := errors.As(err, errResp); ok && errResp.Status == http.StatusNotFound {
logger.Debugf("device %s not found, likely because it has already been deleted from control", string(id))
logger.Debugf("device %s not found, likely because it has already been deleted from control", string(dev.id))
} else {
return false, fmt.Errorf("deleting device: %w", err)
}
} else {
logger.Debugf("device %s deleted from control", string(id))
logger.Debugf("device %s deleted from control", string(dev.id))
}
}
@@ -252,6 +274,14 @@ func (a *tailscaleSTSReconciler) Cleanup(ctx context.Context, logger *zap.Sugare
return false, err
}
}
mo := &metricsOpts{
proxyLabels: labels,
tsNamespace: a.operatorNamespace,
proxyType: typ,
}
if err := maybeCleanupMetricsResources(ctx, mo, a.Client); err != nil {
return false, fmt.Errorf("error cleaning up metrics resources: %w", err)
}
return true, nil
}
@@ -302,7 +332,7 @@ func (a *tailscaleSTSReconciler) reconcileHeadlessService(ctx context.Context, l
return createOrUpdate(ctx, a.Client, a.operatorNamespace, hsvc, func(svc *corev1.Service) { svc.Spec = hsvc.Spec })
}
func (a *tailscaleSTSReconciler) createOrGetSecret(ctx context.Context, logger *zap.SugaredLogger, stsC *tailscaleSTSConfig, hsvc *corev1.Service) (secretName, hash string, configs tailscaleConfigs, _ error) {
func (a *tailscaleSTSReconciler) createOrGetSecret(ctx context.Context, logger *zap.SugaredLogger, stsC *tailscaleSTSConfig, hsvc *corev1.Service) (secretName, hash string, configs tailscaledConfigs, _ error) {
secret := &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
// Hardcode a -0 suffix so that in future, if we support
@@ -360,7 +390,7 @@ func (a *tailscaleSTSReconciler) createOrGetSecret(ctx context.Context, logger *
latest := tailcfg.CapabilityVersion(-1)
var latestConfig ipn.ConfigVAlpha
for key, val := range configs {
fn := tsoperator.TailscaledConfigFileNameForCap(key)
fn := tsoperator.TailscaledConfigFileName(key)
b, err := json.Marshal(val)
if err != nil {
return "", "", nil, fmt.Errorf("error marshalling tailscaled config: %w", err)
@@ -411,40 +441,66 @@ func sanitizeConfigBytes(c ipn.ConfigVAlpha) string {
// that acts as an operator proxy. It retrieves info from a Kubernetes Secret
// labeled with the provided labels.
// Either of device ID, hostname and IPs can be empty string if not found in the Secret.
func (a *tailscaleSTSReconciler) DeviceInfo(ctx context.Context, childLabels map[string]string) (id tailcfg.StableNodeID, hostname string, ips []string, err error) {
func (a *tailscaleSTSReconciler) DeviceInfo(ctx context.Context, childLabels map[string]string, logger *zap.SugaredLogger) (dev *device, err error) {
sec, err := getSingleObject[corev1.Secret](ctx, a.Client, a.operatorNamespace, childLabels)
if err != nil {
return "", "", nil, err
return dev, err
}
if sec == nil {
return "", "", nil, nil
return dev, nil
}
pod := new(corev1.Pod)
if err := a.Get(ctx, types.NamespacedName{Namespace: sec.Namespace, Name: sec.Name}, pod); err != nil && !apierrors.IsNotFound(err) {
return dev, nil
}
return deviceInfo(sec)
return deviceInfo(sec, pod, logger)
}
func deviceInfo(sec *corev1.Secret) (id tailcfg.StableNodeID, hostname string, ips []string, err error) {
id = tailcfg.StableNodeID(sec.Data["device_id"])
// device contains tailscale state of a proxy device as gathered from its tailscale state Secret.
type device struct {
id tailcfg.StableNodeID // device's stable ID
hostname string // MagicDNS name of the device
ips []string // Tailscale IPs of the device
// ingressDNSName is the L7 Ingress DNS name. In practice this will be the same value as hostname, but only set
// when the device has been configured to serve traffic on it via 'tailscale serve'.
ingressDNSName string
}
func deviceInfo(sec *corev1.Secret, pod *corev1.Pod, log *zap.SugaredLogger) (dev *device, err error) {
id := tailcfg.StableNodeID(sec.Data[kubetypes.KeyDeviceID])
if id == "" {
return "", "", nil, nil
return dev, nil
}
dev = &device{id: id}
// Kubernetes chokes on well-formed FQDNs with the trailing dot, so we have
// to remove it.
hostname = strings.TrimSuffix(string(sec.Data["device_fqdn"]), ".")
if hostname == "" {
dev.hostname = strings.TrimSuffix(string(sec.Data[kubetypes.KeyDeviceFQDN]), ".")
if dev.hostname == "" {
// Device ID gets stored and retrieved in a different flow than
// FQDN and IPs. A device that acts as Kubernetes operator
// proxy, but whose route setup has failed might have an device
// proxy, but whose route setup has failed might have a device
// ID, but no FQDN/IPs. If so, return the ID, to allow the
// operator to clean up such devices.
return id, "", nil, nil
return dev, nil
}
if rawDeviceIPs, ok := sec.Data["device_ips"]; ok {
if err := json.Unmarshal(rawDeviceIPs, &ips); err != nil {
return "", "", nil, err
// TODO(irbekrm): we fall back to using the hostname field to determine Ingress's hostname to ensure backwards
// compatibility. In 1.82 we can remove this fallback mechanism.
dev.ingressDNSName = dev.hostname
if proxyCapVer(sec, pod, log) >= 109 {
dev.ingressDNSName = strings.TrimSuffix(string(sec.Data[kubetypes.KeyHTTPSEndpoint]), ".")
if strings.EqualFold(dev.ingressDNSName, kubetypes.ValueNoHTTPS) {
dev.ingressDNSName = ""
}
}
return id, hostname, ips, nil
if rawDeviceIPs, ok := sec.Data[kubetypes.KeyDeviceIPs]; ok {
ips := make([]string, 0)
if err := json.Unmarshal(rawDeviceIPs, &ips); err != nil {
return nil, err
}
dev.ips = ips
}
return dev, nil
}
func newAuthKey(ctx context.Context, tsClient tsClient, tags []string) (string, error) {
@@ -471,7 +527,7 @@ var proxyYaml []byte
//go:embed deploy/manifests/userspace-proxy.yaml
var userspaceProxyYaml []byte
func (a *tailscaleSTSReconciler) reconcileSTS(ctx context.Context, logger *zap.SugaredLogger, sts *tailscaleSTSConfig, headlessSvc *corev1.Service, proxySecret, tsConfigHash string, configs map[tailcfg.CapabilityVersion]ipn.ConfigVAlpha) (*appsv1.StatefulSet, error) {
func (a *tailscaleSTSReconciler) reconcileSTS(ctx context.Context, logger *zap.SugaredLogger, sts *tailscaleSTSConfig, headlessSvc *corev1.Service, proxySecret, tsConfigHash string) (*appsv1.StatefulSet, error) {
ss := new(appsv1.StatefulSet)
if sts.ServeConfig != nil && sts.ForwardClusterTrafficViaL7IngressProxy != true { // If forwarding cluster traffic via is required we need non-userspace + NET_ADMIN + forwarding
if err := yaml.Unmarshal(userspaceProxyYaml, &ss); err != nil {
@@ -516,11 +572,6 @@ func (a *tailscaleSTSReconciler) reconcileSTS(ctx context.Context, logger *zap.S
Name: "TS_KUBE_SECRET",
Value: proxySecret,
},
corev1.EnvVar{
// Old tailscaled config key is still used for backwards compatibility.
Name: "EXPERIMENTAL_TS_CONFIGFILE_PATH",
Value: "/etc/tsconfig/tailscaled",
},
corev1.EnvVar{
// New style is in the form of cap-<capability-version>.hujson.
Name: "TS_EXPERIMENTAL_VERSIONED_CONFIG_DIR",
@@ -666,24 +717,42 @@ func mergeStatefulSetLabelsOrAnnots(current, custom map[string]string, managed [
return custom
}
func debugSetting(pc *tsapi.ProxyClass) bool {
if pc == nil ||
pc.Spec.StatefulSet == nil ||
pc.Spec.StatefulSet.Pod == nil ||
pc.Spec.StatefulSet.Pod.TailscaleContainer == nil ||
pc.Spec.StatefulSet.Pod.TailscaleContainer.Debug == nil {
// This default will change to false in 1.82.0.
return pc.Spec.Metrics != nil && pc.Spec.Metrics.Enable
}
return pc.Spec.StatefulSet.Pod.TailscaleContainer.Debug.Enable
}
func applyProxyClassToStatefulSet(pc *tsapi.ProxyClass, ss *appsv1.StatefulSet, stsCfg *tailscaleSTSConfig, logger *zap.SugaredLogger) *appsv1.StatefulSet {
if pc == nil || ss == nil {
return ss
}
if pc.Spec.Metrics != nil && pc.Spec.Metrics.Enable {
if stsCfg.TailnetTargetFQDN == "" && stsCfg.TailnetTargetIP == "" && !stsCfg.ForwardClusterTrafficViaL7IngressProxy {
enableMetrics(ss, pc)
} else if stsCfg.ForwardClusterTrafficViaL7IngressProxy {
metricsEnabled := pc.Spec.Metrics != nil && pc.Spec.Metrics.Enable
debugEnabled := debugSetting(pc)
if metricsEnabled || debugEnabled {
isEgress := stsCfg != nil && (stsCfg.TailnetTargetFQDN != "" || stsCfg.TailnetTargetIP != "")
isForwardingL7Ingress := stsCfg != nil && stsCfg.ForwardClusterTrafficViaL7IngressProxy
if isEgress {
// TODO (irbekrm): fix this
// For Ingress proxies that have been configured with
// tailscale.com/experimental-forward-cluster-traffic-via-ingress
// annotation, all cluster traffic is forwarded to the
// Ingress backend(s).
logger.Info("ProxyClass specifies that metrics should be enabled, but this is currently not supported for Ingress proxies that accept cluster traffic.")
} else {
logger.Info("ProxyClass specifies that metrics should be enabled, but this is currently not supported for egress proxies.")
} else if isForwardingL7Ingress {
// TODO (irbekrm): fix this
// For egress proxies, currently all cluster traffic is forwarded to the tailnet target.
logger.Info("ProxyClass specifies that metrics should be enabled, but this is currently not supported for Ingress proxies that accept cluster traffic.")
} else {
enableEndpoints(ss, metricsEnabled, debugEnabled)
}
}
@@ -692,7 +761,7 @@ func applyProxyClassToStatefulSet(pc *tsapi.ProxyClass, ss *appsv1.StatefulSet,
}
// Update StatefulSet metadata.
if wantsSSLabels := pc.Spec.StatefulSet.Labels; len(wantsSSLabels) > 0 {
if wantsSSLabels := pc.Spec.StatefulSet.Labels.Parse(); len(wantsSSLabels) > 0 {
ss.ObjectMeta.Labels = mergeStatefulSetLabelsOrAnnots(ss.ObjectMeta.Labels, wantsSSLabels, tailscaleManagedLabels)
}
if wantsSSAnnots := pc.Spec.StatefulSet.Annotations; len(wantsSSAnnots) > 0 {
@@ -704,7 +773,7 @@ func applyProxyClassToStatefulSet(pc *tsapi.ProxyClass, ss *appsv1.StatefulSet,
return ss
}
wantsPod := pc.Spec.StatefulSet.Pod
if wantsPodLabels := wantsPod.Labels; len(wantsPodLabels) > 0 {
if wantsPodLabels := wantsPod.Labels.Parse(); len(wantsPodLabels) > 0 {
ss.Spec.Template.ObjectMeta.Labels = mergeStatefulSetLabelsOrAnnots(ss.Spec.Template.ObjectMeta.Labels, wantsPodLabels, tailscaleManagedLabels)
}
if wantsPodAnnots := wantsPod.Annotations; len(wantsPodAnnots) > 0 {
@@ -716,6 +785,7 @@ func applyProxyClassToStatefulSet(pc *tsapi.ProxyClass, ss *appsv1.StatefulSet,
ss.Spec.Template.Spec.NodeSelector = wantsPod.NodeSelector
ss.Spec.Template.Spec.Affinity = wantsPod.Affinity
ss.Spec.Template.Spec.Tolerations = wantsPod.Tolerations
ss.Spec.Template.Spec.TopologySpreadConstraints = wantsPod.TopologySpreadConstraints
// Update containers.
updateContainer := func(overlay *tsapi.Container, base corev1.Container) corev1.Container {
@@ -760,16 +830,58 @@ func applyProxyClassToStatefulSet(pc *tsapi.ProxyClass, ss *appsv1.StatefulSet,
return ss
}
func enableMetrics(ss *appsv1.StatefulSet, pc *tsapi.ProxyClass) {
func enableEndpoints(ss *appsv1.StatefulSet, metrics, debug bool) {
for i, c := range ss.Spec.Template.Spec.Containers {
if c.Name == "tailscale" {
// Serve metrics on <pod-ip>:9001/debug/metrics. If
// we didn't specify Pod IP here, the proxy would, in
// some cases, also listen to its Tailscale IP- we don't
// want folks to start relying on this side-effect as a
// feature.
ss.Spec.Template.Spec.Containers[i].Env = append(ss.Spec.Template.Spec.Containers[i].Env, corev1.EnvVar{Name: "TS_TAILSCALED_EXTRA_ARGS", Value: "--debug=$(POD_IP):9001"})
ss.Spec.Template.Spec.Containers[i].Ports = append(ss.Spec.Template.Spec.Containers[i].Ports, corev1.ContainerPort{Name: "metrics", Protocol: "TCP", HostPort: 9001, ContainerPort: 9001})
if debug {
ss.Spec.Template.Spec.Containers[i].Env = append(ss.Spec.Template.Spec.Containers[i].Env,
// Serve tailscaled's debug metrics on
// <pod-ip>:9001/debug/metrics. If we didn't specify Pod IP
// here, the proxy would, in some cases, also listen to its
// Tailscale IP- we don't want folks to start relying on this
// side-effect as a feature.
corev1.EnvVar{
Name: "TS_DEBUG_ADDR_PORT",
Value: "$(POD_IP):9001",
},
// TODO(tomhjp): Can remove this env var once 1.76.x is no
// longer supported.
corev1.EnvVar{
Name: "TS_TAILSCALED_EXTRA_ARGS",
Value: "--debug=$(TS_DEBUG_ADDR_PORT)",
},
)
ss.Spec.Template.Spec.Containers[i].Ports = append(ss.Spec.Template.Spec.Containers[i].Ports,
corev1.ContainerPort{
Name: "debug",
Protocol: "TCP",
ContainerPort: 9001,
},
)
}
if metrics {
ss.Spec.Template.Spec.Containers[i].Env = append(ss.Spec.Template.Spec.Containers[i].Env,
// Serve client metrics on <pod-ip>:9002/metrics.
corev1.EnvVar{
Name: "TS_LOCAL_ADDR_PORT",
Value: "$(POD_IP):9002",
},
corev1.EnvVar{
Name: "TS_ENABLE_METRICS",
Value: "true",
},
)
ss.Spec.Template.Spec.Containers[i].Ports = append(ss.Spec.Template.Spec.Containers[i].Ports,
corev1.ContainerPort{
Name: "metrics",
Protocol: "TCP",
ContainerPort: 9002,
},
)
}
break
}
}
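// Sketch of the endpoints this enables (not part of the upstream diff; the exact
// payloads depend on the tailscale container): both listeners are plain HTTP on the
// Pod IP once the pod is running, so inside the cluster something like
//
//	metricsResp, err := http.Get("http://" + podIP + ":9002/metrics")      // client metrics (TS_ENABLE_METRICS)
//	debugResp, err := http.Get("http://" + podIP + ":9001/debug/metrics")  // tailscaled debug metrics (TS_DEBUG_ADDR_PORT)
//
// would reach them, where podIP is the Pod's IP from status.podIPs.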
@@ -783,37 +895,29 @@ func readAuthKey(secret *corev1.Secret, key string) (*string, error) {
return origConf.AuthKey, nil
}
// tailscaledConfig takes a proxy config, a newly generated auth key if
// generated and a Secret with the previous proxy state and auth key and
// returns tailscaled configuration and a hash of that configuration.
//
// As of 2024-05-09 it also returns legacy tailscaled config without the
// later added NoStatefulFilter field to support proxies older than cap95.
// TODO (irbekrm): remove the legacy config once we no longer need to support
// versions older than cap94,
// https://tailscale.com/kb/1236/kubernetes-operator#operator-and-proxies
func tailscaledConfig(stsC *tailscaleSTSConfig, newAuthkey string, oldSecret *corev1.Secret) (tailscaleConfigs, error) {
// tailscaledConfig takes a proxy config, a newly generated auth key if generated and a Secret with the previous proxy
// state and auth key and returns tailscaled config files for currently supported proxy versions and a hash of that
// configuration.
func tailscaledConfig(stsC *tailscaleSTSConfig, newAuthkey string, oldSecret *corev1.Secret) (tailscaledConfigs, error) {
conf := &ipn.ConfigVAlpha{
Version: "alpha0",
AcceptDNS: "false",
AcceptRoutes: "false", // AcceptRoutes defaults to true
Locked: "false",
Hostname: &stsC.Hostname,
NoStatefulFiltering: "false",
NoStatefulFiltering: "true", // Explicitly enforce default value, see #14216
AppConnector: &ipn.AppConnectorPrefs{Advertise: false},
}
// For egress proxies only, we need to ensure that stateful filtering is
// not in place so that traffic from cluster can be forwarded via
// Tailscale IPs.
if stsC.TailnetTargetFQDN != "" || stsC.TailnetTargetIP != "" {
conf.NoStatefulFiltering = "true"
}
if stsC.Connector != nil {
routes, err := netutil.CalcAdvertiseRoutes(stsC.Connector.routes, stsC.Connector.isExitNode)
if err != nil {
return nil, fmt.Errorf("error calculating routes: %w", err)
}
conf.AdvertiseRoutes = routes
if stsC.Connector.isAppConnector {
conf.AppConnector.Advertise = true
}
}
if shouldAcceptRoutes(stsC.ProxyClass) {
conf.AcceptRoutes = "true"
@@ -821,40 +925,56 @@ func tailscaledConfig(stsC *tailscaleSTSConfig, newAuthkey string, oldSecret *co
if newAuthkey != "" {
conf.AuthKey = &newAuthkey
} else if oldSecret != nil {
var err error
latest := tailcfg.CapabilityVersion(-1)
latestStr := ""
for k, data := range oldSecret.Data {
// write to StringData, read from Data as StringData is write-only
if len(data) == 0 {
continue
}
v, err := tsoperator.CapVerFromFileName(k)
if err != nil {
continue
}
if v > latest {
latestStr = k
latest = v
}
} else if shouldRetainAuthKey(oldSecret) {
key, err := authKeyFromSecret(oldSecret)
if err != nil {
return nil, fmt.Errorf("error retrieving auth key from Secret: %w", err)
}
// Allow for configs that don't contain an auth key. Perhaps
// users have some mechanisms to delete them. Auth key is
// normally not needed after the initial login.
if latestStr != "" {
conf.AuthKey, err = readAuthKey(oldSecret, latestStr)
if err != nil {
return nil, err
}
conf.AuthKey = key
}
capVerConfigs := make(map[tailcfg.CapabilityVersion]ipn.ConfigVAlpha)
capVerConfigs[107] = *conf
// AppConnector config option is only understood by clients of capver 107 and newer.
conf.AppConnector = nil
capVerConfigs[95] = *conf
return capVerConfigs, nil
}
func authKeyFromSecret(s *corev1.Secret) (key *string, err error) {
latest := tailcfg.CapabilityVersion(-1)
latestStr := ""
for k, data := range s.Data {
// write to StringData, read from Data as StringData is write-only
if len(data) == 0 {
continue
}
v, err := tsoperator.CapVerFromFileName(k)
if err != nil {
continue
}
if v > latest {
latestStr = k
latest = v
}
}
capVerConfigs := make(map[tailcfg.CapabilityVersion]ipn.ConfigVAlpha)
capVerConfigs[95] = *conf
// legacy config should not contain NoStatefulFiltering field.
conf.NoStatefulFiltering.Clear()
capVerConfigs[94] = *conf
return capVerConfigs, nil
// Allow for configs that don't contain an auth key. Perhaps
// users have some mechanisms to delete them. Auth key is
// normally not needed after the initial login.
if latestStr != "" {
return readAuthKey(s, latestStr)
}
return key, nil
}
// shouldRetainAuthKey returns true if the state stored in a proxy's state Secret suggests that auth key should be
// retained (because the proxy has not yet successfully authenticated).
func shouldRetainAuthKey(s *corev1.Secret) bool {
if s == nil {
return false // nothing to retain here
}
return len(s.Data["device_id"]) == 0 // proxy has not authed yet
}
func shouldAcceptRoutes(pc *tsapi.ProxyClass) bool {
@@ -868,7 +988,7 @@ type ptrObject[T any] interface {
*T
}
type tailscaleConfigs map[tailcfg.CapabilityVersion]ipn.ConfigVAlpha
type tailscaledConfigs map[tailcfg.CapabilityVersion]ipn.ConfigVAlpha
// hashBytes produces a hash for the provided tailscaled config that is the same across
// different invocations of this code. We do not use the
@@ -879,7 +999,7 @@ type tailscaleConfigs map[tailcfg.CapabilityVersion]ipn.ConfigVAlpha
// thing that changed is operator version (the hash is also exposed to users via
// an annotation and might be confusing if it changes without the config having
// changed).
func tailscaledConfigHash(c tailscaleConfigs) (string, error) {
func tailscaledConfigHash(c tailscaledConfigs) (string, error) {
b, err := json.Marshal(c)
if err != nil {
return "", fmt.Errorf("error marshalling tailscaled configs: %w", err)
@@ -991,3 +1111,23 @@ func nameForService(svc *corev1.Service) string {
func isValidFirewallMode(m string) bool {
return m == "auto" || m == "nftables" || m == "iptables"
}
// proxyCapVer accepts a proxy state Secret and a proxy Pod and returns the capability version of the proxy Pod.
// This is best effort - if the capability version can not (currently) be determined, it returns -1.
func proxyCapVer(sec *corev1.Secret, pod *corev1.Pod, log *zap.SugaredLogger) tailcfg.CapabilityVersion {
if sec == nil || pod == nil {
return tailcfg.CapabilityVersion(-1)
}
if len(sec.Data[kubetypes.KeyCapVer]) == 0 || len(sec.Data[kubetypes.KeyPodUID]) == 0 {
return tailcfg.CapabilityVersion(-1)
}
capVer, err := strconv.Atoi(string(sec.Data[kubetypes.KeyCapVer]))
if err != nil {
log.Infof("[unexpected]: unexpected capability version in proxy's state Secret, expected an integer, got %q", string(sec.Data[kubetypes.KeyCapVer]))
return tailcfg.CapabilityVersion(-1)
}
if !strings.EqualFold(string(pod.ObjectMeta.UID), string(sec.Data[kubetypes.KeyPodUID])) {
return tailcfg.CapabilityVersion(-1)
}
return tailcfg.CapabilityVersion(capVer)
}
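// Sketch of how the result is used (see deviceInfo above; not part of the upstream
// diff): callers gate newer state-Secret fields on the proxy's capability version
// and fall back to legacy behaviour when it cannot be determined (-1):
//
//	if proxyCapVer(sec, pod, log) >= 109 {
//		// cap 109+ proxies publish the HTTPS endpoint they serve on
//		dev.ingressDNSName = strings.TrimSuffix(string(sec.Data[kubetypes.KeyHTTPSEndpoint]), ".")
//	}
//
// Comparing the Pod UID with the UID recorded in the Secret guards against trusting a
// capability version written by an older Pod that no longer backs this state Secret.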


@@ -18,6 +18,7 @@ import (
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/resource"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"sigs.k8s.io/yaml"
tsapi "tailscale.com/k8s-operator/apis/v1alpha1"
"tailscale.com/types/ptr"
@@ -60,10 +61,10 @@ func Test_applyProxyClassToStatefulSet(t *testing.T) {
proxyClassAllOpts := &tsapi.ProxyClass{
Spec: tsapi.ProxyClassSpec{
StatefulSet: &tsapi.StatefulSet{
Labels: map[string]string{"foo": "bar"},
Labels: tsapi.Labels{"foo": "bar"},
Annotations: map[string]string{"foo.io/bar": "foo"},
Pod: &tsapi.Pod{
Labels: map[string]string{"bar": "foo"},
Labels: tsapi.Labels{"bar": "foo"},
Annotations: map[string]string{"bar.io/foo": "foo"},
SecurityContext: &corev1.PodSecurityContext{
RunAsUser: ptr.To(int64(0)),
@@ -73,6 +74,16 @@ func Test_applyProxyClassToStatefulSet(t *testing.T) {
NodeSelector: map[string]string{"beta.kubernetes.io/os": "linux"},
Affinity: &corev1.Affinity{NodeAffinity: &corev1.NodeAffinity{RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{}}},
Tolerations: []corev1.Toleration{{Key: "", Operator: "Exists"}},
TopologySpreadConstraints: []corev1.TopologySpreadConstraint{
{
WhenUnsatisfiable: "DoNotSchedule",
TopologyKey: "kubernetes.io/hostname",
MaxSkew: 3,
LabelSelector: &metav1.LabelSelector{
MatchLabels: map[string]string{"foo": "bar"},
},
},
},
TailscaleContainer: &tsapi.Container{
SecurityContext: &corev1.SecurityContext{
Privileged: ptr.To(true),
@@ -105,21 +116,36 @@ func Test_applyProxyClassToStatefulSet(t *testing.T) {
proxyClassJustLabels := &tsapi.ProxyClass{
Spec: tsapi.ProxyClassSpec{
StatefulSet: &tsapi.StatefulSet{
Labels: map[string]string{"foo": "bar"},
Labels: tsapi.Labels{"foo": "bar"},
Annotations: map[string]string{"foo.io/bar": "foo"},
Pod: &tsapi.Pod{
Labels: map[string]string{"bar": "foo"},
Labels: tsapi.Labels{"bar": "foo"},
Annotations: map[string]string{"bar.io/foo": "foo"},
},
},
},
}
proxyClassMetrics := &tsapi.ProxyClass{
Spec: tsapi.ProxyClassSpec{
Metrics: &tsapi.Metrics{Enable: true},
},
}
proxyClassWithMetricsDebug := func(metrics bool, debug *bool) *tsapi.ProxyClass {
return &tsapi.ProxyClass{
Spec: tsapi.ProxyClassSpec{
Metrics: &tsapi.Metrics{Enable: metrics},
StatefulSet: func() *tsapi.StatefulSet {
if debug == nil {
return nil
}
return &tsapi.StatefulSet{
Pod: &tsapi.Pod{
TailscaleContainer: &tsapi.Container{
Debug: &tsapi.Debug{Enable: *debug},
},
},
}
}(),
},
}
}
var userspaceProxySS, nonUserspaceProxySS appsv1.StatefulSet
if err := yaml.Unmarshal(userspaceProxyYaml, &userspaceProxySS); err != nil {
t.Fatalf("unmarshaling userspace proxy template: %v", err)
@@ -149,9 +175,9 @@ func Test_applyProxyClassToStatefulSet(t *testing.T) {
// 1. Test that a ProxyClass with all fields set gets correctly applied
// to a Statefulset built from non-userspace proxy template.
wantSS := nonUserspaceProxySS.DeepCopy()
wantSS.ObjectMeta.Labels = mergeMapKeys(wantSS.ObjectMeta.Labels, proxyClassAllOpts.Spec.StatefulSet.Labels)
wantSS.ObjectMeta.Annotations = mergeMapKeys(wantSS.ObjectMeta.Annotations, proxyClassAllOpts.Spec.StatefulSet.Annotations)
wantSS.Spec.Template.Labels = proxyClassAllOpts.Spec.StatefulSet.Pod.Labels
updateMap(wantSS.ObjectMeta.Labels, proxyClassAllOpts.Spec.StatefulSet.Labels.Parse())
updateMap(wantSS.ObjectMeta.Annotations, proxyClassAllOpts.Spec.StatefulSet.Annotations)
wantSS.Spec.Template.Labels = proxyClassAllOpts.Spec.StatefulSet.Pod.Labels.Parse()
wantSS.Spec.Template.Annotations = proxyClassAllOpts.Spec.StatefulSet.Pod.Annotations
wantSS.Spec.Template.Spec.SecurityContext = proxyClassAllOpts.Spec.StatefulSet.Pod.SecurityContext
wantSS.Spec.Template.Spec.ImagePullSecrets = proxyClassAllOpts.Spec.StatefulSet.Pod.ImagePullSecrets
@@ -159,6 +185,7 @@ func Test_applyProxyClassToStatefulSet(t *testing.T) {
wantSS.Spec.Template.Spec.NodeSelector = proxyClassAllOpts.Spec.StatefulSet.Pod.NodeSelector
wantSS.Spec.Template.Spec.Affinity = proxyClassAllOpts.Spec.StatefulSet.Pod.Affinity
wantSS.Spec.Template.Spec.Tolerations = proxyClassAllOpts.Spec.StatefulSet.Pod.Tolerations
wantSS.Spec.Template.Spec.TopologySpreadConstraints = proxyClassAllOpts.Spec.StatefulSet.Pod.TopologySpreadConstraints
wantSS.Spec.Template.Spec.Containers[0].SecurityContext = proxyClassAllOpts.Spec.StatefulSet.Pod.TailscaleContainer.SecurityContext
wantSS.Spec.Template.Spec.InitContainers[0].SecurityContext = proxyClassAllOpts.Spec.StatefulSet.Pod.TailscaleInitContainer.SecurityContext
wantSS.Spec.Template.Spec.Containers[0].Resources = proxyClassAllOpts.Spec.StatefulSet.Pod.TailscaleContainer.Resources
@@ -172,28 +199,28 @@ func Test_applyProxyClassToStatefulSet(t *testing.T) {
gotSS := applyProxyClassToStatefulSet(proxyClassAllOpts, nonUserspaceProxySS.DeepCopy(), new(tailscaleSTSConfig), zl.Sugar())
if diff := cmp.Diff(gotSS, wantSS); diff != "" {
t.Fatalf("Unexpected result applying ProxyClass with all fields set to a StatefulSet for non-userspace proxy (-got +want):\n%s", diff)
t.Errorf("Unexpected result applying ProxyClass with all fields set to a StatefulSet for non-userspace proxy (-got +want):\n%s", diff)
}
// 2. Test that a ProxyClass with custom labels and annotations for
// StatefulSet and Pod set gets correctly applied to a Statefulset built
// from non-userspace proxy template.
wantSS = nonUserspaceProxySS.DeepCopy()
wantSS.ObjectMeta.Labels = mergeMapKeys(wantSS.ObjectMeta.Labels, proxyClassJustLabels.Spec.StatefulSet.Labels)
wantSS.ObjectMeta.Annotations = mergeMapKeys(wantSS.ObjectMeta.Annotations, proxyClassJustLabels.Spec.StatefulSet.Annotations)
wantSS.Spec.Template.Labels = proxyClassJustLabels.Spec.StatefulSet.Pod.Labels
updateMap(wantSS.ObjectMeta.Labels, proxyClassJustLabels.Spec.StatefulSet.Labels.Parse())
updateMap(wantSS.ObjectMeta.Annotations, proxyClassJustLabels.Spec.StatefulSet.Annotations)
wantSS.Spec.Template.Labels = proxyClassJustLabels.Spec.StatefulSet.Pod.Labels.Parse()
wantSS.Spec.Template.Annotations = proxyClassJustLabels.Spec.StatefulSet.Pod.Annotations
gotSS = applyProxyClassToStatefulSet(proxyClassJustLabels, nonUserspaceProxySS.DeepCopy(), new(tailscaleSTSConfig), zl.Sugar())
if diff := cmp.Diff(gotSS, wantSS); diff != "" {
t.Fatalf("Unexpected result applying ProxyClass with custom labels and annotations to a StatefulSet for non-userspace proxy (-got +want):\n%s", diff)
t.Errorf("Unexpected result applying ProxyClass with custom labels and annotations to a StatefulSet for non-userspace proxy (-got +want):\n%s", diff)
}
// 3. Test that a ProxyClass with all fields set gets correctly applied
// to a Statefulset built from a userspace proxy template.
wantSS = userspaceProxySS.DeepCopy()
wantSS.ObjectMeta.Labels = mergeMapKeys(wantSS.ObjectMeta.Labels, proxyClassAllOpts.Spec.StatefulSet.Labels)
wantSS.ObjectMeta.Annotations = mergeMapKeys(wantSS.ObjectMeta.Annotations, proxyClassAllOpts.Spec.StatefulSet.Annotations)
wantSS.Spec.Template.Labels = proxyClassAllOpts.Spec.StatefulSet.Pod.Labels
updateMap(wantSS.ObjectMeta.Labels, proxyClassAllOpts.Spec.StatefulSet.Labels.Parse())
updateMap(wantSS.ObjectMeta.Annotations, proxyClassAllOpts.Spec.StatefulSet.Annotations)
wantSS.Spec.Template.Labels = proxyClassAllOpts.Spec.StatefulSet.Pod.Labels.Parse()
wantSS.Spec.Template.Annotations = proxyClassAllOpts.Spec.StatefulSet.Pod.Annotations
wantSS.Spec.Template.Spec.SecurityContext = proxyClassAllOpts.Spec.StatefulSet.Pod.SecurityContext
wantSS.Spec.Template.Spec.ImagePullSecrets = proxyClassAllOpts.Spec.StatefulSet.Pod.ImagePullSecrets
@@ -201,6 +228,7 @@ func Test_applyProxyClassToStatefulSet(t *testing.T) {
wantSS.Spec.Template.Spec.NodeSelector = proxyClassAllOpts.Spec.StatefulSet.Pod.NodeSelector
wantSS.Spec.Template.Spec.Affinity = proxyClassAllOpts.Spec.StatefulSet.Pod.Affinity
wantSS.Spec.Template.Spec.Tolerations = proxyClassAllOpts.Spec.StatefulSet.Pod.Tolerations
wantSS.Spec.Template.Spec.TopologySpreadConstraints = proxyClassAllOpts.Spec.StatefulSet.Pod.TopologySpreadConstraints
wantSS.Spec.Template.Spec.Containers[0].SecurityContext = proxyClassAllOpts.Spec.StatefulSet.Pod.TailscaleContainer.SecurityContext
wantSS.Spec.Template.Spec.Containers[0].Resources = proxyClassAllOpts.Spec.StatefulSet.Pod.TailscaleContainer.Resources
wantSS.Spec.Template.Spec.Containers[0].Env = append(wantSS.Spec.Template.Spec.Containers[0].Env, []corev1.EnvVar{{Name: "foo", Value: "bar"}, {Name: "TS_USERSPACE", Value: "true"}, {Name: "bar"}}...)
@@ -208,36 +236,61 @@ func Test_applyProxyClassToStatefulSet(t *testing.T) {
wantSS.Spec.Template.Spec.Containers[0].Image = "ghcr.io/my-repo/tailscale:v0.01testsomething"
gotSS = applyProxyClassToStatefulSet(proxyClassAllOpts, userspaceProxySS.DeepCopy(), new(tailscaleSTSConfig), zl.Sugar())
if diff := cmp.Diff(gotSS, wantSS); diff != "" {
t.Fatalf("Unexpected result applying ProxyClass with all options to a StatefulSet for a userspace proxy (-got +want):\n%s", diff)
t.Errorf("Unexpected result applying ProxyClass with all options to a StatefulSet for a userspace proxy (-got +want):\n%s", diff)
}
// 4. Test that a ProxyClass with custom labels and annotations gets correctly applied
// to a Statefulset built from a userspace proxy template.
wantSS = userspaceProxySS.DeepCopy()
wantSS.ObjectMeta.Labels = mergeMapKeys(wantSS.ObjectMeta.Labels, proxyClassJustLabels.Spec.StatefulSet.Labels)
wantSS.ObjectMeta.Annotations = mergeMapKeys(wantSS.ObjectMeta.Annotations, proxyClassJustLabels.Spec.StatefulSet.Annotations)
wantSS.Spec.Template.Labels = proxyClassJustLabels.Spec.StatefulSet.Pod.Labels
updateMap(wantSS.ObjectMeta.Labels, proxyClassJustLabels.Spec.StatefulSet.Labels.Parse())
updateMap(wantSS.ObjectMeta.Annotations, proxyClassJustLabels.Spec.StatefulSet.Annotations)
wantSS.Spec.Template.Labels = proxyClassJustLabels.Spec.StatefulSet.Pod.Labels.Parse()
wantSS.Spec.Template.Annotations = proxyClassJustLabels.Spec.StatefulSet.Pod.Annotations
gotSS = applyProxyClassToStatefulSet(proxyClassJustLabels, userspaceProxySS.DeepCopy(), new(tailscaleSTSConfig), zl.Sugar())
if diff := cmp.Diff(gotSS, wantSS); diff != "" {
t.Fatalf("Unexpected result applying ProxyClass with custom labels and annotations to a StatefulSet for a userspace proxy (-got +want):\n%s", diff)
t.Errorf("Unexpected result applying ProxyClass with custom labels and annotations to a StatefulSet for a userspace proxy (-got +want):\n%s", diff)
}
// 5. Test that a ProxyClass with metrics enabled gets correctly applied to a StatefulSet.
// 5. Enabling metrics without explicitly configuring debug defaults to enabling both the metrics and debug endpoints.
wantSS = nonUserspaceProxySS.DeepCopy()
wantSS.Spec.Template.Spec.Containers[0].Env = append(wantSS.Spec.Template.Spec.Containers[0].Env, corev1.EnvVar{Name: "TS_TAILSCALED_EXTRA_ARGS", Value: "--debug=$(POD_IP):9001"})
wantSS.Spec.Template.Spec.Containers[0].Ports = []corev1.ContainerPort{{Name: "metrics", Protocol: "TCP", ContainerPort: 9001, HostPort: 9001}}
gotSS = applyProxyClassToStatefulSet(proxyClassMetrics, nonUserspaceProxySS.DeepCopy(), new(tailscaleSTSConfig), zl.Sugar())
wantSS.Spec.Template.Spec.Containers[0].Env = append(wantSS.Spec.Template.Spec.Containers[0].Env,
corev1.EnvVar{Name: "TS_DEBUG_ADDR_PORT", Value: "$(POD_IP):9001"},
corev1.EnvVar{Name: "TS_TAILSCALED_EXTRA_ARGS", Value: "--debug=$(TS_DEBUG_ADDR_PORT)"},
corev1.EnvVar{Name: "TS_LOCAL_ADDR_PORT", Value: "$(POD_IP):9002"},
corev1.EnvVar{Name: "TS_ENABLE_METRICS", Value: "true"},
)
wantSS.Spec.Template.Spec.Containers[0].Ports = []corev1.ContainerPort{
{Name: "debug", Protocol: "TCP", ContainerPort: 9001},
{Name: "metrics", Protocol: "TCP", ContainerPort: 9002},
}
gotSS = applyProxyClassToStatefulSet(proxyClassWithMetricsDebug(true, nil), nonUserspaceProxySS.DeepCopy(), new(tailscaleSTSConfig), zl.Sugar())
if diff := cmp.Diff(gotSS, wantSS); diff != "" {
t.Fatalf("Unexpected result applying ProxyClass with metrics enabled to a StatefulSet (-got +want):\n%s", diff)
t.Errorf("Unexpected result applying ProxyClass with metrics enabled to a StatefulSet (-got +want):\n%s", diff)
}
}
func mergeMapKeys(a, b map[string]string) map[string]string {
for key, val := range b {
a[key] = val
// 6. Enable _just_ metrics by explicitly disabling debug.
wantSS = nonUserspaceProxySS.DeepCopy()
wantSS.Spec.Template.Spec.Containers[0].Env = append(wantSS.Spec.Template.Spec.Containers[0].Env,
corev1.EnvVar{Name: "TS_LOCAL_ADDR_PORT", Value: "$(POD_IP):9002"},
corev1.EnvVar{Name: "TS_ENABLE_METRICS", Value: "true"},
)
wantSS.Spec.Template.Spec.Containers[0].Ports = []corev1.ContainerPort{{Name: "metrics", Protocol: "TCP", ContainerPort: 9002}}
gotSS = applyProxyClassToStatefulSet(proxyClassWithMetricsDebug(true, ptr.To(false)), nonUserspaceProxySS.DeepCopy(), new(tailscaleSTSConfig), zl.Sugar())
if diff := cmp.Diff(gotSS, wantSS); diff != "" {
t.Errorf("Unexpected result applying ProxyClass with metrics enabled to a StatefulSet (-got +want):\n%s", diff)
}
// 7. Enable _just_ debug without metrics.
wantSS = nonUserspaceProxySS.DeepCopy()
wantSS.Spec.Template.Spec.Containers[0].Env = append(wantSS.Spec.Template.Spec.Containers[0].Env,
corev1.EnvVar{Name: "TS_DEBUG_ADDR_PORT", Value: "$(POD_IP):9001"},
corev1.EnvVar{Name: "TS_TAILSCALED_EXTRA_ARGS", Value: "--debug=$(TS_DEBUG_ADDR_PORT)"},
)
wantSS.Spec.Template.Spec.Containers[0].Ports = []corev1.ContainerPort{{Name: "debug", Protocol: "TCP", ContainerPort: 9001}}
gotSS = applyProxyClassToStatefulSet(proxyClassWithMetricsDebug(false, ptr.To(true)), nonUserspaceProxySS.DeepCopy(), new(tailscaleSTSConfig), zl.Sugar())
if diff := cmp.Diff(gotSS, wantSS); diff != "" {
t.Errorf("Unexpected result applying ProxyClass with metrics enabled to a StatefulSet (-got +want):\n%s", diff)
}
return a
}
func Test_mergeStatefulSetLabelsOrAnnots(t *testing.T) {
@@ -331,3 +384,10 @@ func Test_mergeStatefulSetLabelsOrAnnots(t *testing.T) {
})
}
}
// updateMap updates map a with the values from map b.
func updateMap(a, b map[string]string) {
for key, val := range b {
a[key] = val
}
}

View File

@@ -64,7 +64,7 @@ type ServiceReconciler struct {
clock tstime.Clock
proxyDefaultClass string
defaultProxyClass string
}
var (
@@ -112,12 +112,24 @@ func (a *ServiceReconciler) Reconcile(ctx context.Context, req reconcile.Request
return reconcile.Result{}, fmt.Errorf("failed to get svc: %w", err)
}
if _, ok := svc.Annotations[AnnotationProxyGroup]; ok {
return reconcile.Result{}, nil // this reconciler should not look at Services for ProxyGroup
}
if !svc.DeletionTimestamp.IsZero() || !a.isTailscaleService(svc) {
logger.Debugf("service is being deleted or is (no longer) referring to Tailscale ingress/egress, ensuring any created resources are cleaned up")
return reconcile.Result{}, a.maybeCleanup(ctx, logger, svc)
}
return reconcile.Result{}, a.maybeProvision(ctx, logger, svc)
if err := a.maybeProvision(ctx, logger, svc); err != nil {
if strings.Contains(err.Error(), optimisticLockErrorMsg) {
logger.Infof("optimistic lock error, retrying: %s", err)
} else {
return reconcile.Result{}, err
}
}
return reconcile.Result{}, nil
}
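The block above swallows Kubernetes optimistic-concurrency conflicts so the update is simply retried on a later reconcile. The same idea, factored into a helper, might look like this sketch; the helper name is made up, and optimisticLockErrorMsg is the constant already referenced above.

// Sketch only: suppress optimistic-lock conflicts so they are logged and retried
// by a later reconcile instead of being surfaced as reconcile errors.
func suppressOptimisticLockErr(err error, logger *zap.SugaredLogger) error {
	if err == nil {
		return nil
	}
	if strings.Contains(err.Error(), optimisticLockErrorMsg) {
		logger.Infof("optimistic lock error, retrying: %s", err)
		return nil
	}
	return err
}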
// maybeCleanup removes any existing resources related to serving svc over tailscale.
@@ -127,7 +139,7 @@ func (a *ServiceReconciler) Reconcile(ctx context.Context, req reconcile.Request
func (a *ServiceReconciler) maybeCleanup(ctx context.Context, logger *zap.SugaredLogger, svc *corev1.Service) (err error) {
oldSvcStatus := svc.Status.DeepCopy()
defer func() {
if !apiequality.Semantic.DeepEqual(oldSvcStatus, svc.Status) {
if !apiequality.Semantic.DeepEqual(oldSvcStatus, &svc.Status) {
// An error encountered here should get returned by the Reconcile function.
err = errors.Join(err, a.Client.Status().Update(ctx, svc))
}
@@ -148,7 +160,12 @@ func (a *ServiceReconciler) maybeCleanup(ctx context.Context, logger *zap.Sugare
return nil
}
if done, err := a.ssr.Cleanup(ctx, logger, childResourceLabels(svc.Name, svc.Namespace, "svc")); err != nil {
proxyTyp := proxyTypeEgress
if a.shouldExpose(svc) {
proxyTyp = proxyTypeIngressService
}
if done, err := a.ssr.Cleanup(ctx, logger, childResourceLabels(svc.Name, svc.Namespace, "svc"), proxyTyp); err != nil {
return fmt.Errorf("failed to cleanup: %w", err)
} else if !done {
logger.Debugf("cleanup not done yet, waiting for next reconcile")
@@ -187,7 +204,7 @@ func (a *ServiceReconciler) maybeCleanup(ctx context.Context, logger *zap.Sugare
func (a *ServiceReconciler) maybeProvision(ctx context.Context, logger *zap.SugaredLogger, svc *corev1.Service) (err error) {
oldSvcStatus := svc.Status.DeepCopy()
defer func() {
if !apiequality.Semantic.DeepEqual(oldSvcStatus, svc.Status) {
if !apiequality.Semantic.DeepEqual(oldSvcStatus, &svc.Status) {
// An error encountered here should get returned by the Reconcile function.
err = errors.Join(err, a.Client.Status().Update(ctx, svc))
}
@@ -211,7 +228,7 @@ func (a *ServiceReconciler) maybeProvision(ctx context.Context, logger *zap.Suga
return nil
}
proxyClass := proxyClassForObject(svc, a.proxyDefaultClass)
proxyClass := proxyClassForObject(svc, a.defaultProxyClass)
if proxyClass != "" {
if ready, err := proxyClassIsReady(ctx, proxyClass, a.Client); err != nil {
errMsg := fmt.Errorf("error verifying ProxyClass for Service: %w", err)
@@ -252,6 +269,10 @@ func (a *ServiceReconciler) maybeProvision(ctx context.Context, logger *zap.Suga
ChildResourceLabels: crl,
ProxyClassName: proxyClass,
}
sts.proxyType = proxyTypeEgress
if a.shouldExpose(svc) {
sts.proxyType = proxyTypeIngressService
}
a.mu.Lock()
if a.shouldExposeClusterIP(svc) {
@@ -307,11 +328,11 @@ func (a *ServiceReconciler) maybeProvision(ctx context.Context, logger *zap.Suga
return nil
}
_, tsHost, tsIPs, err := a.ssr.DeviceInfo(ctx, crl)
dev, err := a.ssr.DeviceInfo(ctx, crl, logger)
if err != nil {
return fmt.Errorf("failed to get device ID: %w", err)
}
if tsHost == "" {
if dev == nil || dev.hostname == "" {
msg := "no Tailscale hostname known yet, waiting for proxy pod to finish auth"
logger.Debug(msg)
// No hostname yet. Wait for the proxy pod to auth.
@@ -320,9 +341,9 @@ func (a *ServiceReconciler) maybeProvision(ctx context.Context, logger *zap.Suga
return nil
}
logger.Debugf("setting Service LoadBalancer status to %q, %s", tsHost, strings.Join(tsIPs, ", "))
logger.Debugf("setting Service LoadBalancer status to %q, %s", dev.hostname, strings.Join(dev.ips, ", "))
ingress := []corev1.LoadBalancerIngress{
{Hostname: tsHost},
{Hostname: dev.hostname},
}
clusterIPAddr, err := netip.ParseAddr(svc.Spec.ClusterIP)
if err != nil {
@@ -330,7 +351,7 @@ func (a *ServiceReconciler) maybeProvision(ctx context.Context, logger *zap.Suga
tsoperator.SetServiceCondition(svc, tsapi.ProxyReady, metav1.ConditionFalse, reasonProxyFailed, msg, a.clock, logger)
return errors.New(msg)
}
for _, ip := range tsIPs {
for _, ip := range dev.ips {
addr, err := netip.ParseAddr(ip)
if err != nil {
continue
@@ -354,6 +375,15 @@ func validateService(svc *corev1.Service) []string {
violations = append(violations, fmt.Sprintf("invalid value of annotation %s: %q does not appear to be a valid MagicDNS name", AnnotationTailnetTargetFQDN, fqdn))
}
}
if ipStr := svc.Annotations[AnnotationTailnetTargetIP]; ipStr != "" {
ip, err := netip.ParseAddr(ipStr)
if err != nil {
violations = append(violations, fmt.Sprintf("invalid value of annotation %s: %q could not be parsed as a valid IP Address, error: %s", AnnotationTailnetTargetIP, ipStr, err))
} else if !ip.IsValid() {
violations = append(violations, fmt.Sprintf("parsed IP address in annotation %s: %q is not valid", AnnotationTailnetTargetIP, ipStr))
}
}
svcName := nameForService(svc)
if err := dnsname.ValidLabel(svcName); err != nil {
if _, ok := svc.Annotations[AnnotationHostname]; ok {

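As an illustration of the new tailnet-target-IP validation above (not part of this diff), this is how netip.ParseAddr treats candidate annotation values:

// Illustration only: "100.64.0.1" and "fd7a:115c:a1e0::1" parse cleanly, while
// "not-an-ip" returns an error and would be reported as a violation by validateService.
func exampleTailnetTargetIPCheck() {
	for _, v := range []string{"100.64.0.1", "fd7a:115c:a1e0::1", "not-an-ip"} {
		if _, err := netip.ParseAddr(v); err != nil {
			fmt.Printf("%q rejected: %v\n", v, err)
		} else {
			fmt.Printf("%q accepted\n", v)
		}
	}
}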
View File

@@ -8,6 +8,7 @@ package main
import (
"context"
"encoding/json"
"fmt"
"net/netip"
"reflect"
"strings"
@@ -21,6 +22,7 @@ import (
corev1 "k8s.io/api/core/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/tools/record"
"sigs.k8s.io/controller-runtime/pkg/client"
@@ -39,7 +41,10 @@ type configOpts struct {
secretName string
hostname string
namespace string
tailscaleNamespace string
namespaced bool
parentType string
proxyType string
priorityClassName string
firewallMode string
tailnetTargetIP string
@@ -48,11 +53,18 @@ type configOpts struct {
clusterTargetDNS string
subnetRoutes string
isExitNode bool
isAppConnector bool
confFileHash string
serveConfig *ipn.ServeConfig
shouldEnableForwardingClusterTrafficViaIngress bool
proxyClass string // configuration from the named ProxyClass should be applied to proxy resources
app string
shouldRemoveAuthKey bool
secretExtraData map[string][]byte
resourceVersion string
enableMetrics bool
serviceMonitorLabels tsapi.Labels
}
func expectedSTS(t *testing.T, cl client.Client, opts configOpts) *appsv1.StatefulSet {
@@ -67,14 +79,13 @@ func expectedSTS(t *testing.T, cl client.Client, opts configOpts) *appsv1.Statef
Env: []corev1.EnvVar{
{Name: "TS_USERSPACE", Value: "false"},
{Name: "POD_IP", ValueFrom: &corev1.EnvVarSource{FieldRef: &corev1.ObjectFieldSelector{APIVersion: "", FieldPath: "status.podIP"}, ResourceFieldRef: nil, ConfigMapKeyRef: nil, SecretKeyRef: nil}},
{Name: "POD_NAME", ValueFrom: &corev1.EnvVarSource{FieldRef: &corev1.ObjectFieldSelector{APIVersion: "", FieldPath: "metadata.name"}, ResourceFieldRef: nil, ConfigMapKeyRef: nil, SecretKeyRef: nil}},
{Name: "POD_UID", ValueFrom: &corev1.EnvVarSource{FieldRef: &corev1.ObjectFieldSelector{APIVersion: "", FieldPath: "metadata.uid"}, ResourceFieldRef: nil, ConfigMapKeyRef: nil, SecretKeyRef: nil}},
{Name: "TS_KUBE_SECRET", Value: opts.secretName},
{Name: "EXPERIMENTAL_TS_CONFIGFILE_PATH", Value: "/etc/tsconfig/tailscaled"},
{Name: "TS_EXPERIMENTAL_VERSIONED_CONFIG_DIR", Value: "/etc/tsconfig"},
},
SecurityContext: &corev1.SecurityContext{
Capabilities: &corev1.Capabilities{
Add: []corev1.Capability{"NET_ADMIN"},
},
Privileged: ptr.To(true),
},
ImagePullPolicy: "Always",
}
@@ -148,6 +159,29 @@ func expectedSTS(t *testing.T, cl client.Client, opts configOpts) *appsv1.Statef
Name: "TS_INTERNAL_APP",
Value: opts.app,
})
if opts.enableMetrics {
tsContainer.Env = append(tsContainer.Env,
corev1.EnvVar{
Name: "TS_DEBUG_ADDR_PORT",
Value: "$(POD_IP):9001"},
corev1.EnvVar{
Name: "TS_TAILSCALED_EXTRA_ARGS",
Value: "--debug=$(TS_DEBUG_ADDR_PORT)",
},
corev1.EnvVar{
Name: "TS_LOCAL_ADDR_PORT",
Value: "$(POD_IP):9002",
},
corev1.EnvVar{
Name: "TS_ENABLE_METRICS",
Value: "true",
},
)
tsContainer.Ports = append(tsContainer.Ports,
corev1.ContainerPort{Name: "debug", ContainerPort: 9001, Protocol: "TCP"},
corev1.ContainerPort{Name: "metrics", ContainerPort: 9002, Protocol: "TCP"},
)
}
ss := &appsv1.StatefulSet{
TypeMeta: metav1.TypeMeta{
Kind: "StatefulSet",
@@ -226,8 +260,9 @@ func expectedSTSUserspace(t *testing.T, cl client.Client, opts configOpts) *apps
Env: []corev1.EnvVar{
{Name: "TS_USERSPACE", Value: "true"},
{Name: "POD_IP", ValueFrom: &corev1.EnvVarSource{FieldRef: &corev1.ObjectFieldSelector{APIVersion: "", FieldPath: "status.podIP"}, ResourceFieldRef: nil, ConfigMapKeyRef: nil, SecretKeyRef: nil}},
{Name: "POD_NAME", ValueFrom: &corev1.EnvVarSource{FieldRef: &corev1.ObjectFieldSelector{APIVersion: "", FieldPath: "metadata.name"}, ResourceFieldRef: nil, ConfigMapKeyRef: nil, SecretKeyRef: nil}},
{Name: "POD_UID", ValueFrom: &corev1.EnvVarSource{FieldRef: &corev1.ObjectFieldSelector{APIVersion: "", FieldPath: "metadata.uid"}, ResourceFieldRef: nil, ConfigMapKeyRef: nil, SecretKeyRef: nil}},
{Name: "TS_KUBE_SECRET", Value: opts.secretName},
{Name: "EXPERIMENTAL_TS_CONFIGFILE_PATH", Value: "/etc/tsconfig/tailscaled"},
{Name: "TS_EXPERIMENTAL_VERSIONED_CONFIG_DIR", Value: "/etc/tsconfig"},
{Name: "TS_SERVE_CONFIG", Value: "/etc/tailscaled/serve-config"},
{Name: "TS_INTERNAL_APP", Value: opts.app},
@@ -238,6 +273,29 @@ func expectedSTSUserspace(t *testing.T, cl client.Client, opts configOpts) *apps
{Name: "serve-config", ReadOnly: true, MountPath: "/etc/tailscaled"},
},
}
if opts.enableMetrics {
tsContainer.Env = append(tsContainer.Env,
corev1.EnvVar{
Name: "TS_DEBUG_ADDR_PORT",
Value: "$(POD_IP):9001"},
corev1.EnvVar{
Name: "TS_TAILSCALED_EXTRA_ARGS",
Value: "--debug=$(TS_DEBUG_ADDR_PORT)",
},
corev1.EnvVar{
Name: "TS_LOCAL_ADDR_PORT",
Value: "$(POD_IP):9002",
},
corev1.EnvVar{
Name: "TS_ENABLE_METRICS",
Value: "true",
},
)
tsContainer.Ports = append(tsContainer.Ports, corev1.ContainerPort{
Name: "debug", ContainerPort: 9001, Protocol: "TCP"},
corev1.ContainerPort{Name: "metrics", ContainerPort: 9002, Protocol: "TCP"},
)
}
volumes := []corev1.Volume{
{
Name: "tailscaledconfig",
@@ -332,6 +390,90 @@ func expectedHeadlessService(name string, parentType string) *corev1.Service {
}
}
func expectedMetricsService(opts configOpts) *corev1.Service {
labels := metricsLabels(opts)
selector := map[string]string{
"tailscale.com/managed": "true",
"tailscale.com/parent-resource": "test",
"tailscale.com/parent-resource-type": opts.parentType,
}
if opts.namespaced {
selector["tailscale.com/parent-resource-ns"] = opts.namespace
}
return &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: metricsResourceName(opts.stsName),
Namespace: opts.tailscaleNamespace,
Labels: labels,
},
Spec: corev1.ServiceSpec{
Selector: selector,
Type: corev1.ServiceTypeClusterIP,
Ports: []corev1.ServicePort{{Protocol: "TCP", Port: 9002, Name: "metrics"}},
},
}
}
func metricsLabels(opts configOpts) map[string]string {
promJob := fmt.Sprintf("ts_%s_default_test", opts.proxyType)
if !opts.namespaced {
promJob = fmt.Sprintf("ts_%s_test", opts.proxyType)
}
labels := map[string]string{
"tailscale.com/managed": "true",
"tailscale.com/metrics-target": opts.stsName,
"ts_prom_job": promJob,
"ts_proxy_type": opts.proxyType,
"ts_proxy_parent_name": "test",
}
if opts.namespaced {
labels["ts_proxy_parent_namespace"] = "default"
}
return labels
}
func expectedServiceMonitor(t *testing.T, opts configOpts) *unstructured.Unstructured {
t.Helper()
smLabels := metricsLabels(opts)
if len(opts.serviceMonitorLabels) != 0 {
smLabels = mergeMapKeys(smLabels, opts.serviceMonitorLabels.Parse())
}
name := metricsResourceName(opts.stsName)
sm := &ServiceMonitor{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: opts.tailscaleNamespace,
Labels: smLabels,
ResourceVersion: opts.resourceVersion,
OwnerReferences: []metav1.OwnerReference{{APIVersion: "v1", Kind: "Service", Name: name, BlockOwnerDeletion: ptr.To(true), Controller: ptr.To(true)}},
},
TypeMeta: metav1.TypeMeta{
Kind: "ServiceMonitor",
APIVersion: "monitoring.coreos.com/v1",
},
Spec: ServiceMonitorSpec{
Selector: metav1.LabelSelector{MatchLabels: metricsLabels(opts)},
Endpoints: []ServiceMonitorEndpoint{{
Port: "metrics",
}},
NamespaceSelector: ServiceMonitorNamespaceSelector{
MatchNames: []string{opts.tailscaleNamespace},
},
JobLabel: "ts_prom_job",
TargetLabels: []string{
"ts_proxy_parent_name",
"ts_proxy_parent_namespace",
"ts_proxy_type",
},
},
}
u, err := serviceMonitorToUnstructured(sm)
if err != nil {
t.Fatalf("error converting ServiceMonitor to unstructured: %v", err)
}
return u
}
func expectedSecret(t *testing.T, cl client.Client, opts configOpts) *corev1.Secret {
t.Helper()
s := &corev1.Secret{
@@ -348,12 +490,14 @@ func expectedSecret(t *testing.T, cl client.Client, opts configOpts) *corev1.Sec
mak.Set(&s.StringData, "serve-config", string(serveConfigBs))
}
conf := &ipn.ConfigVAlpha{
Version: "alpha0",
AcceptDNS: "false",
Hostname: &opts.hostname,
Locked: "false",
AuthKey: ptr.To("secret-authkey"),
AcceptRoutes: "false",
Version: "alpha0",
AcceptDNS: "false",
Hostname: &opts.hostname,
Locked: "false",
AuthKey: ptr.To("secret-authkey"),
AcceptRoutes: "false",
AppConnector: &ipn.AppConnectorPrefs{Advertise: false},
NoStatefulFiltering: "true",
}
if opts.proxyClass != "" {
t.Logf("applying configuration from ProxyClass %s", opts.proxyClass)
@@ -365,6 +509,12 @@ func expectedSecret(t *testing.T, cl client.Client, opts configOpts) *corev1.Sec
conf.AcceptRoutes = "true"
}
}
if opts.shouldRemoveAuthKey {
conf.AuthKey = nil
}
if opts.isAppConnector {
conf.AppConnector = &ipn.AppConnectorPrefs{Advertise: true}
}
var routes []netip.Prefix
if opts.subnetRoutes != "" || opts.isExitNode {
r := opts.subnetRoutes
@@ -380,21 +530,17 @@ func expectedSecret(t *testing.T, cl client.Client, opts configOpts) *corev1.Sec
}
}
conf.AdvertiseRoutes = routes
b, err := json.Marshal(conf)
bnn, err := json.Marshal(conf)
if err != nil {
t.Fatalf("error marshalling tailscaled config")
}
if opts.tailnetTargetFQDN != "" || opts.tailnetTargetIP != "" {
conf.NoStatefulFiltering = "true"
} else {
conf.NoStatefulFiltering = "false"
}
conf.AppConnector = nil
bn, err := json.Marshal(conf)
if err != nil {
t.Fatalf("error marshalling tailscaled config")
}
mak.Set(&s.StringData, "tailscaled", string(b))
mak.Set(&s.StringData, "cap-95.hujson", string(bn))
mak.Set(&s.StringData, "cap-107.hujson", string(bnn))
labels := map[string]string{
"tailscale.com/managed": "true",
"tailscale.com/parent-resource": "test",
@@ -405,6 +551,9 @@ func expectedSecret(t *testing.T, cl client.Client, opts configOpts) *corev1.Sec
labels["tailscale.com/parent-resource-ns"] = "" // Connector is cluster scoped
}
s.Labels = labels
for key, val := range opts.secretExtraData {
mak.Set(&s.Data, key, val)
}
return s
}
@@ -492,13 +641,29 @@ func expectEqual[T any, O ptrObject[T]](t *testing.T, client client.Client, want
}
}
func expectEqualUnstructured(t *testing.T, client client.Client, want *unstructured.Unstructured) {
t.Helper()
got := &unstructured.Unstructured{}
got.SetGroupVersionKind(want.GroupVersionKind())
if err := client.Get(context.Background(), types.NamespacedName{
Name: want.GetName(),
Namespace: want.GetNamespace(),
}, got); err != nil {
t.Fatalf("getting %q: %v", want.GetName(), err)
}
if diff := cmp.Diff(got, want); diff != "" {
t.Fatalf("unexpected contents of Unstructured (-got +want):\n%s", diff)
}
}
func expectMissing[T any, O ptrObject[T]](t *testing.T, client client.Client, ns, name string) {
t.Helper()
obj := O(new(T))
if err := client.Get(context.Background(), types.NamespacedName{
err := client.Get(context.Background(), types.NamespacedName{
Name: name,
Namespace: ns,
}, obj); !apierrors.IsNotFound(err) {
}, obj)
if !apierrors.IsNotFound(err) {
t.Fatalf("%s %s/%s unexpectedly present, wanted missing", reflect.TypeOf(obj).Elem().Name(), ns, name)
}
}
@@ -596,7 +761,7 @@ func (c *fakeTSClient) CreateKey(ctx context.Context, caps tailscale.KeyCapabili
func (c *fakeTSClient) Device(ctx context.Context, deviceID string, fields *tailscale.DeviceFieldsOpts) (*tailscale.Device, error) {
return &tailscale.Device{
DeviceID: deviceID,
Hostname: "test-device",
Hostname: "hostname-" + deviceID,
Addresses: []string{
"1.2.3.4",
"::1",
@@ -631,21 +796,17 @@ func removeHashAnnotation(sts *appsv1.StatefulSet) {
delete(sts.Spec.Template.Annotations, podAnnotationLastSetConfigFileHash)
}
func removeTargetPortsFromSvc(svc *corev1.Service) {
newPorts := make([]corev1.ServicePort, 0)
for _, p := range svc.Spec.Ports {
newPorts = append(newPorts, corev1.ServicePort{Protocol: p.Protocol, Port: p.Port, Name: p.Name})
}
svc.Spec.Ports = newPorts
}
func removeAuthKeyIfExistsModifier(t *testing.T) func(s *corev1.Secret) {
return func(secret *corev1.Secret) {
t.Helper()
if len(secret.StringData["tailscaled"]) != 0 {
conf := &ipn.ConfigVAlpha{}
if err := json.Unmarshal([]byte(secret.StringData["tailscaled"]), conf); err != nil {
t.Fatalf("error unmarshalling 'tailscaled' contents: %v", err)
}
conf.AuthKey = nil
b, err := json.Marshal(conf)
if err != nil {
t.Fatalf("error marshalling updated 'tailscaled' config: %v", err)
}
mak.Set(&secret.StringData, "tailscaled", string(b))
}
if len(secret.StringData["cap-95.hujson"]) != 0 {
conf := &ipn.ConfigVAlpha{}
if err := json.Unmarshal([]byte(secret.StringData["cap-95.hujson"]), conf); err != nil {
@@ -658,5 +819,17 @@ func removeAuthKeyIfExistsModifier(t *testing.T) func(s *corev1.Secret) {
}
mak.Set(&secret.StringData, "cap-95.hujson", string(b))
}
if len(secret.StringData["cap-107.hujson"]) != 0 {
conf := &ipn.ConfigVAlpha{}
if err := json.Unmarshal([]byte(secret.StringData["cap-107.hujson"]), conf); err != nil {
t.Fatalf("error umarshalling 'cap-107.hujson' contents: %v", err)
}
conf.AuthKey = nil
b, err := json.Marshal(conf)
if err != nil {
t.Fatalf("error marshalling 'cap-107.huson' contents: %v", err)
}
mak.Set(&secret.StringData, "cap-107.hujson", string(b))
}
}
}

View File

@@ -11,6 +11,7 @@ import (
"fmt"
"net/http"
"slices"
"strings"
"sync"
"github.com/pkg/errors"
@@ -38,6 +39,7 @@ import (
const (
reasonRecorderCreationFailed = "RecorderCreationFailed"
reasonRecorderCreating = "RecorderCreating"
reasonRecorderCreated = "RecorderCreated"
reasonRecorderInvalid = "RecorderInvalid"
@@ -102,7 +104,7 @@ func (r *RecorderReconciler) Reconcile(ctx context.Context, req reconcile.Reques
oldTSRStatus := tsr.Status.DeepCopy()
setStatusReady := func(tsr *tsapi.Recorder, status metav1.ConditionStatus, reason, message string) (reconcile.Result, error) {
tsoperator.SetRecorderCondition(tsr, tsapi.RecorderReady, status, reason, message, tsr.Generation, r.clock, logger)
if !apiequality.Semantic.DeepEqual(oldTSRStatus, tsr.Status) {
if !apiequality.Semantic.DeepEqual(oldTSRStatus, &tsr.Status) {
// An error encountered here should get returned by the Reconcile function.
if updateErr := r.Client.Status().Update(ctx, tsr); updateErr != nil {
err = errors.Wrap(err, updateErr.Error())
@@ -119,23 +121,28 @@ func (r *RecorderReconciler) Reconcile(ctx context.Context, req reconcile.Reques
logger.Infof("ensuring Recorder is set up")
tsr.Finalizers = append(tsr.Finalizers, FinalizerName)
if err := r.Update(ctx, tsr); err != nil {
logger.Errorf("error adding finalizer: %w", err)
return setStatusReady(tsr, metav1.ConditionFalse, reasonRecorderCreationFailed, reasonRecorderCreationFailed)
}
}
if err := r.validate(tsr); err != nil {
logger.Errorf("error validating Recorder spec: %w", err)
message := fmt.Sprintf("Recorder is invalid: %s", err)
r.recorder.Eventf(tsr, corev1.EventTypeWarning, reasonRecorderInvalid, message)
return setStatusReady(tsr, metav1.ConditionFalse, reasonRecorderInvalid, message)
}
if err = r.maybeProvision(ctx, tsr); err != nil {
logger.Errorf("error creating Recorder resources: %w", err)
reason := reasonRecorderCreationFailed
message := fmt.Sprintf("failed creating Recorder: %s", err)
r.recorder.Eventf(tsr, corev1.EventTypeWarning, reasonRecorderCreationFailed, message)
return setStatusReady(tsr, metav1.ConditionFalse, reasonRecorderCreationFailed, message)
if strings.Contains(err.Error(), optimisticLockErrorMsg) {
reason = reasonRecorderCreating
message = fmt.Sprintf("optimistic lock error, retrying: %s", err)
err = nil
logger.Info(message)
} else {
r.recorder.Eventf(tsr, corev1.EventTypeWarning, reasonRecorderCreationFailed, message)
}
return setStatusReady(tsr, metav1.ConditionFalse, reason, message)
}
logger.Info("Recorder resources synced")
@@ -199,7 +206,7 @@ func (r *RecorderReconciler) maybeProvision(ctx context.Context, tsr *tsapi.Reco
return fmt.Errorf("error creating StatefulSet: %w", err)
}
var devices []tsapi.TailnetDevice
var devices []tsapi.RecorderTailnetDevice
device, ok, err := r.getDeviceInfo(ctx, tsr.Name)
if err != nil {
@@ -302,9 +309,7 @@ func (r *RecorderReconciler) validate(tsr *tsapi.Recorder) error {
return nil
}
// getNodeMetadata returns 'ok == true' iff the node ID is found. The dnsName
// is expected to always be non-empty if the node ID is, but not required.
func (r *RecorderReconciler) getNodeMetadata(ctx context.Context, tsrName string) (id tailcfg.StableNodeID, dnsName string, ok bool, err error) {
func (r *RecorderReconciler) getStateSecret(ctx context.Context, tsrName string) (*corev1.Secret, error) {
secret := &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Namespace: r.tsNamespace,
@@ -313,12 +318,27 @@ func (r *RecorderReconciler) getNodeMetadata(ctx context.Context, tsrName string
}
if err := r.Get(ctx, client.ObjectKeyFromObject(secret), secret); err != nil {
if apierrors.IsNotFound(err) {
return "", "", false, nil
return nil, nil
}
return nil, fmt.Errorf("error getting state Secret: %w", err)
}
return secret, nil
}
func (r *RecorderReconciler) getNodeMetadata(ctx context.Context, tsrName string) (id tailcfg.StableNodeID, dnsName string, ok bool, err error) {
secret, err := r.getStateSecret(ctx, tsrName)
if err != nil || secret == nil {
return "", "", false, err
}
return getNodeMetadata(ctx, secret)
}
// getNodeMetadata returns 'ok == true' iff the node ID is found. The dnsName
// is expected to always be non-empty if the node ID is, but not required.
func getNodeMetadata(ctx context.Context, secret *corev1.Secret) (id tailcfg.StableNodeID, dnsName string, ok bool, err error) {
// TODO(tomhjp): Should maybe use ipn to parse the following info instead.
currentProfile, ok := secret.Data[currentProfileKey]
if !ok {
@@ -337,20 +357,29 @@ func (r *RecorderReconciler) getNodeMetadata(ctx context.Context, tsrName string
return tailcfg.StableNodeID(profile.Config.NodeID), profile.Config.UserProfile.LoginName, ok, nil
}
func (r *RecorderReconciler) getDeviceInfo(ctx context.Context, tsrName string) (d tsapi.TailnetDevice, ok bool, err error) {
nodeID, dnsName, ok, err := r.getNodeMetadata(ctx, tsrName)
func (r *RecorderReconciler) getDeviceInfo(ctx context.Context, tsrName string) (d tsapi.RecorderTailnetDevice, ok bool, err error) {
secret, err := r.getStateSecret(ctx, tsrName)
if err != nil || secret == nil {
return tsapi.RecorderTailnetDevice{}, false, err
}
return getDeviceInfo(ctx, r.tsClient, secret)
}
func getDeviceInfo(ctx context.Context, tsClient tsClient, secret *corev1.Secret) (d tsapi.RecorderTailnetDevice, ok bool, err error) {
nodeID, dnsName, ok, err := getNodeMetadata(ctx, secret)
if !ok || err != nil {
return tsapi.TailnetDevice{}, false, err
return tsapi.RecorderTailnetDevice{}, false, err
}
// TODO(tomhjp): The profile info doesn't include addresses, which is why we
// need the API. Should we instead update the profile to include addresses?
device, err := r.tsClient.Device(ctx, string(nodeID), nil)
device, err := tsClient.Device(ctx, string(nodeID), nil)
if err != nil {
return tsapi.TailnetDevice{}, false, fmt.Errorf("failed to get device info from API: %w", err)
return tsapi.RecorderTailnetDevice{}, false, fmt.Errorf("failed to get device info from API: %w", err)
}
d = tsapi.TailnetDevice{
d = tsapi.RecorderTailnetDevice{
Hostname: device.Hostname,
TailnetIPs: device.Addresses,
}
@@ -370,6 +399,6 @@ type profile struct {
} `json:"Config"`
}
func markedForDeletion(tsr *tsapi.Recorder) bool {
return !tsr.DeletionTimestamp.IsZero()
func markedForDeletion(obj metav1.Object) bool {
return !obj.GetDeletionTimestamp().IsZero()
}

View File

@@ -130,6 +130,15 @@ func tsrRole(tsr *tsapi.Recorder, namespace string) *rbacv1.Role {
fmt.Sprintf("%s-0", tsr.Name), // Contains the node state.
},
},
{
APIGroups: []string{""},
Resources: []string{"events"},
Verbs: []string{
"get",
"create",
"patch",
},
},
},
}
}
@@ -203,6 +212,14 @@ func env(tsr *tsapi.Recorder) []corev1.EnvVar {
},
},
},
{
Name: "POD_UID",
ValueFrom: &corev1.EnvVarSource{
FieldRef: &corev1.ObjectFieldSelector{
FieldPath: "metadata.uid",
},
},
},
{
Name: "TS_STATE",
Value: "kube:$(POD_NAME)",

View File

@@ -105,9 +105,9 @@ func TestRecorder(t *testing.T) {
})
expectReconciled(t, reconciler, "", tsr.Name)
tsr.Status.Devices = []tsapi.TailnetDevice{
tsr.Status.Devices = []tsapi.RecorderTailnetDevice{
{
Hostname: "test-device",
Hostname: "hostname-nodeid-123",
TailnetIPs: []string{"1.2.3.4", "::1"},
URL: "https://test-0.example.ts.net",
},

View File

@@ -5,24 +5,40 @@
package main
import (
"flag"
"log"
"net"
"os"
"strconv"
"time"
"tailscale.com/net/stun"
)
func main() {
log.SetFlags(0)
if len(os.Args) < 2 || len(os.Args) > 3 {
log.Fatalf("usage: %s <hostname> [port]", os.Args[0])
}
host := os.Args[1]
var host string
port := "3478"
if len(os.Args) == 3 {
port = os.Args[2]
var readTimeout time.Duration
flag.DurationVar(&readTimeout, "timeout", 3*time.Second, "response wait timeout")
flag.Parse()
values := flag.Args()
if len(values) < 1 || len(values) > 2 {
log.Printf("usage: %s <hostname> [port]", os.Args[0])
flag.PrintDefaults()
os.Exit(1)
} else {
for i, value := range values {
switch i {
case 0:
host = value
case 1:
port = value
}
}
}
_, err := strconv.ParseUint(port, 10, 16)
if err != nil {
@@ -46,6 +62,10 @@ func main() {
log.Fatal(err)
}
err = c.SetReadDeadline(time.Now().Add(readTimeout))
if err != nil {
log.Fatal(err)
}
var buf [1024]byte
n, raddr, err := c.ReadFromUDPAddrPort(buf[:])
if err != nil {

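The new -timeout flag works by arming a read deadline before the blocking read, so a missing STUN response fails fast instead of hanging. A minimal sketch of the same pattern follows; the function name is made up, and errors and fmt would need to be imported in addition to this file's imports.

// Sketch only (not part of this diff): read one STUN response with a bounded wait.
// A deadline violation surfaces as a net.Error whose Timeout() reports true.
func readSTUNResponse(c *net.UDPConn, readTimeout time.Duration) ([]byte, error) {
	if err := c.SetReadDeadline(time.Now().Add(readTimeout)); err != nil {
		return nil, err
	}
	var buf [1024]byte
	n, _, err := c.ReadFromUDPAddrPort(buf[:])
	if err != nil {
		var ne net.Error
		if errors.As(err, &ne) && ne.Timeout() {
			return nil, fmt.Errorf("no STUN response within %v", readTimeout)
		}
		return nil, err
	}
	return buf[:n], nil
}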
View File

@@ -8,7 +8,6 @@ tailscale.com/cmd/stund dependencies: (generated by github.com/tailscale/depawar
github.com/go-json-experiment/json/internal/jsonopts from github.com/go-json-experiment/json+
github.com/go-json-experiment/json/internal/jsonwire from github.com/go-json-experiment/json+
github.com/go-json-experiment/json/jsontext from github.com/go-json-experiment/json+
github.com/google/uuid from tailscale.com/util/fastuuid
💣 github.com/prometheus/client_golang/prometheus from tailscale.com/tsweb/promvarz
github.com/prometheus/client_golang/prometheus/internal from github.com/prometheus/client_golang/prometheus
github.com/prometheus/client_model/go from github.com/prometheus/client_golang/prometheus+
@@ -56,6 +55,7 @@ tailscale.com/cmd/stund dependencies: (generated by github.com/tailscale/depawar
tailscale.com/net/stun from tailscale.com/net/stunserver
tailscale.com/net/stunserver from tailscale.com/cmd/stund
tailscale.com/net/tsaddr from tailscale.com/tsweb
tailscale.com/syncs from tailscale.com/metrics
tailscale.com/tailcfg from tailscale.com/version
tailscale.com/tsweb from tailscale.com/cmd/stund
tailscale.com/tsweb/promvarz from tailscale.com/tsweb
@@ -67,15 +67,17 @@ tailscale.com/cmd/stund dependencies: (generated by github.com/tailscale/depawar
tailscale.com/types/logger from tailscale.com/tsweb
tailscale.com/types/opt from tailscale.com/envknob+
tailscale.com/types/ptr from tailscale.com/tailcfg+
tailscale.com/types/result from tailscale.com/util/lineiter
tailscale.com/types/structs from tailscale.com/tailcfg+
tailscale.com/types/tkatype from tailscale.com/tailcfg+
tailscale.com/types/views from tailscale.com/net/tsaddr+
tailscale.com/util/ctxkey from tailscale.com/tsweb+
L 💣 tailscale.com/util/dirwalk from tailscale.com/metrics
tailscale.com/util/dnsname from tailscale.com/tailcfg
tailscale.com/util/fastuuid from tailscale.com/tsweb
tailscale.com/util/lineread from tailscale.com/version/distro
tailscale.com/util/lineiter from tailscale.com/version/distro
tailscale.com/util/mak from tailscale.com/syncs
tailscale.com/util/nocasemaps from tailscale.com/types/ipproto
tailscale.com/util/rands from tailscale.com/tsweb
tailscale.com/util/slicesx from tailscale.com/tailcfg
tailscale.com/util/vizerror from tailscale.com/tailcfg+
tailscale.com/version from tailscale.com/envknob+
@@ -132,7 +134,6 @@ tailscale.com/cmd/stund dependencies: (generated by github.com/tailscale/depawar
crypto/tls from net/http+
crypto/x509 from crypto/tls
crypto/x509/pkix from crypto/x509
database/sql/driver from github.com/google/uuid
embed from crypto/internal/nistec+
encoding from encoding/json+
encoding/asn1 from crypto/x509+
@@ -163,7 +164,7 @@ tailscale.com/cmd/stund dependencies: (generated by github.com/tailscale/depawar
math/big from crypto/dsa+
math/bits from compress/flate+
math/rand from math/big+
math/rand/v2 from tailscale.com/util/fastuuid+
math/rand/v2 from internal/concurrent+
mime from github.com/prometheus/common/expfmt+
mime/multipart from net/http
mime/quotedprintable from mime/multipart

View File

@@ -1,220 +0,0 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build cgo || !darwin
package main
import (
"bytes"
"context"
"image/color"
"image/png"
"sync"
"time"
"fyne.io/systray"
"github.com/fogleman/gg"
)
// tsLogo represents the state of the 3x3 dot grid in the Tailscale logo.
// A 0 represents a gray dot, any other value is a white dot.
type tsLogo [9]byte
var (
// disconnected is all gray dots
disconnected = tsLogo{
0, 0, 0,
0, 0, 0,
0, 0, 0,
}
// connected is the normal Tailscale logo
connected = tsLogo{
0, 0, 0,
1, 1, 1,
0, 1, 0,
}
// loading is a special tsLogo value that is not meant to be rendered directly,
// but indicates that the loading animation should be shown.
loading = tsLogo{'l', 'o', 'a', 'd', 'i', 'n', 'g'}
// loadingIcons are shown in sequence as an animated loading icon.
loadingLogos = []tsLogo{
{
0, 1, 1,
1, 0, 1,
0, 0, 1,
},
{
0, 1, 1,
0, 0, 1,
0, 1, 0,
},
{
0, 1, 1,
0, 0, 0,
0, 0, 1,
},
{
0, 0, 1,
0, 1, 0,
0, 0, 0,
},
{
0, 1, 0,
0, 0, 0,
0, 0, 0,
},
{
0, 0, 0,
0, 0, 1,
0, 0, 0,
},
{
0, 0, 0,
0, 0, 0,
0, 0, 0,
},
{
0, 0, 1,
0, 0, 0,
0, 0, 0,
},
{
0, 0, 0,
0, 0, 0,
1, 0, 0,
},
{
0, 0, 0,
0, 0, 0,
1, 1, 0,
},
{
0, 0, 0,
1, 0, 0,
1, 1, 0,
},
{
0, 0, 0,
1, 1, 0,
0, 1, 0,
},
{
0, 0, 0,
1, 1, 0,
0, 1, 1,
},
{
0, 0, 0,
1, 1, 1,
0, 0, 1,
},
{
0, 1, 0,
0, 1, 1,
1, 0, 1,
},
}
)
var (
black = color.NRGBA{0, 0, 0, 255}
white = color.NRGBA{255, 255, 255, 255}
gray = color.NRGBA{255, 255, 255, 102}
)
// render returns a PNG image of the logo.
func (logo tsLogo) render() *bytes.Buffer {
const radius = 25
const borderUnits = 1
dim := radius * (8 + borderUnits*2)
dc := gg.NewContext(dim, dim)
dc.DrawRectangle(0, 0, float64(dim), float64(dim))
dc.SetColor(black)
dc.Fill()
for y := 0; y < 3; y++ {
for x := 0; x < 3; x++ {
px := (borderUnits + 1 + 3*x) * radius
py := (borderUnits + 1 + 3*y) * radius
col := white
if logo[y*3+x] == 0 {
col = gray
}
dc.DrawCircle(float64(px), float64(py), radius)
dc.SetColor(col)
dc.Fill()
}
}
b := bytes.NewBuffer(nil)
png.Encode(b, dc.Image())
return b
}
// setAppIcon renders logo and sets it as the systray icon.
func setAppIcon(icon tsLogo) {
if icon == loading {
startLoadingAnimation()
} else {
stopLoadingAnimation()
systray.SetIcon(icon.render().Bytes())
}
}
var (
loadingMu sync.Mutex // protects loadingCancel
// loadingCancel stops the loading animation in the systray icon.
// This is nil if the animation is not currently active.
loadingCancel func()
)
// startLoadingAnimation starts the animated loading icon in the system tray.
// The animation continues until [stopLoadingAnimation] is called.
// If the loading animation is already active, this func does nothing.
func startLoadingAnimation() {
loadingMu.Lock()
defer loadingMu.Unlock()
if loadingCancel != nil {
// loading icon already displayed
return
}
ctx := context.Background()
ctx, loadingCancel = context.WithCancel(ctx)
go func() {
t := time.NewTicker(500 * time.Millisecond)
var i int
for {
select {
case <-ctx.Done():
return
case <-t.C:
systray.SetIcon(loadingLogos[i].render().Bytes())
i++
if i >= len(loadingLogos) {
i = 0
}
}
}
}()
}
// stopLoadingAnimation stops the animated loading icon in the system tray.
// If the loading animation is not currently active, this func does nothing.
func stopLoadingAnimation() {
loadingMu.Lock()
defer loadingMu.Unlock()
if loadingCancel != nil {
loadingCancel()
loadingCancel = nil
}
}

View File

@@ -3,256 +3,13 @@
//go:build cgo || !darwin
// The systray command is a minimal Tailscale systray application for Linux.
// systray is a minimal Tailscale systray application.
package main
import (
"context"
"errors"
"fmt"
"io"
"log"
"os"
"strings"
"sync"
"time"
"fyne.io/systray"
"github.com/atotto/clipboard"
dbus "github.com/godbus/dbus/v5"
"github.com/toqueteos/webbrowser"
"tailscale.com/client/tailscale"
"tailscale.com/ipn"
"tailscale.com/ipn/ipnstate"
)
var (
localClient tailscale.LocalClient
chState chan ipn.State // tailscale state changes
appIcon *os.File
"tailscale.com/client/systray"
)
func main() {
systray.Run(onReady, onExit)
}
// Menu represents the systray menu, its items, and the current Tailscale state.
type Menu struct {
mu sync.Mutex // protects the entire Menu
status *ipnstate.Status
connect *systray.MenuItem
disconnect *systray.MenuItem
self *systray.MenuItem
more *systray.MenuItem
quit *systray.MenuItem
eventCancel func() // cancel eventLoop
}
func onReady() {
log.Printf("starting")
ctx := context.Background()
setAppIcon(disconnected)
// dbus wants a file path for notification icons, so copy to a temp file.
appIcon, _ = os.CreateTemp("", "tailscale-systray.png")
io.Copy(appIcon, connected.render())
chState = make(chan ipn.State, 1)
status, err := localClient.Status(ctx)
if err != nil {
log.Print(err)
}
menu := new(Menu)
menu.rebuild(status)
go watchIPNBus(ctx)
}
// rebuild the systray menu based on the current Tailscale state.
//
// We currently rebuild the entire menu because it is not easy to update the existing menu.
// You cannot iterate over the items in a menu, nor can you remove some items like separators.
// So for now we rebuild the whole thing, and can optimize this later if needed.
func (menu *Menu) rebuild(status *ipnstate.Status) {
menu.mu.Lock()
defer menu.mu.Unlock()
if menu.eventCancel != nil {
menu.eventCancel()
}
menu.status = status
systray.ResetMenu()
menu.connect = systray.AddMenuItem("Connect", "")
menu.disconnect = systray.AddMenuItem("Disconnect", "")
menu.disconnect.Hide()
systray.AddSeparator()
if status != nil && status.Self != nil {
title := fmt.Sprintf("This Device: %s (%s)", status.Self.HostName, status.Self.TailscaleIPs[0])
menu.self = systray.AddMenuItem(title, "")
}
systray.AddSeparator()
menu.more = systray.AddMenuItem("More settings", "")
menu.more.Enable()
menu.quit = systray.AddMenuItem("Quit", "Quit the app")
menu.quit.Enable()
ctx := context.Background()
ctx, menu.eventCancel = context.WithCancel(ctx)
go menu.eventLoop(ctx)
}
// eventLoop is the main event loop for handling click events on menu items
// and responding to Tailscale state changes.
// This method does not return until ctx.Done is closed.
func (menu *Menu) eventLoop(ctx context.Context) {
for {
select {
case <-ctx.Done():
return
case state := <-chState:
switch state {
case ipn.Running:
setAppIcon(loading)
status, err := localClient.Status(ctx)
if err != nil {
log.Printf("error getting tailscale status: %v", err)
}
menu.rebuild(status)
setAppIcon(connected)
menu.connect.SetTitle("Connected")
menu.connect.Disable()
menu.disconnect.Show()
menu.disconnect.Enable()
case ipn.NoState, ipn.Stopped:
menu.connect.SetTitle("Connect")
menu.connect.Enable()
menu.disconnect.Hide()
setAppIcon(disconnected)
case ipn.Starting:
setAppIcon(loading)
}
case <-menu.connect.ClickedCh:
_, err := localClient.EditPrefs(ctx, &ipn.MaskedPrefs{
Prefs: ipn.Prefs{
WantRunning: true,
},
WantRunningSet: true,
})
if err != nil {
log.Print(err)
continue
}
case <-menu.disconnect.ClickedCh:
_, err := localClient.EditPrefs(ctx, &ipn.MaskedPrefs{
Prefs: ipn.Prefs{
WantRunning: false,
},
WantRunningSet: true,
})
if err != nil {
log.Printf("disconnecting: %v", err)
continue
}
case <-menu.self.ClickedCh:
copyTailscaleIP(menu.status.Self)
case <-menu.more.ClickedCh:
webbrowser.Open("http://100.100.100.100/")
case <-menu.quit.ClickedCh:
systray.Quit()
}
}
}
// watchIPNBus subscribes to the tailscale event bus and sends state updates to chState.
// This method does not return.
func watchIPNBus(ctx context.Context) {
for {
if err := watchIPNBusInner(ctx); err != nil {
log.Println(err)
if errors.Is(err, context.Canceled) {
// If the context got canceled, we will never be able to
// reconnect to IPN bus, so exit the process.
log.Fatalf("watchIPNBus: %v", err)
}
}
// If our watch connection breaks, wait a bit before reconnecting. No
// reason to spam the logs if e.g. tailscaled is restarting or goes
// down.
time.Sleep(3 * time.Second)
}
}
func watchIPNBusInner(ctx context.Context) error {
watcher, err := localClient.WatchIPNBus(ctx, ipn.NotifyInitialState|ipn.NotifyNoPrivateKeys)
if err != nil {
return fmt.Errorf("watching ipn bus: %w", err)
}
defer watcher.Close()
for {
select {
case <-ctx.Done():
return nil
default:
n, err := watcher.Next()
if err != nil {
return fmt.Errorf("ipnbus error: %w", err)
}
if n.State != nil {
chState <- *n.State
log.Printf("new state: %v", n.State)
}
}
}
}
// copyTailscaleIP copies the first Tailscale IP of the given device to the clipboard
// and sends a notification with the copied value.
func copyTailscaleIP(device *ipnstate.PeerStatus) {
if device == nil || len(device.TailscaleIPs) == 0 {
return
}
name := strings.Split(device.DNSName, ".")[0]
ip := device.TailscaleIPs[0].String()
err := clipboard.WriteAll(ip)
if err != nil {
log.Printf("clipboard error: %v", err)
}
sendNotification(fmt.Sprintf("Copied Address for %v", name), ip)
}
// sendNotification sends a desktop notification with the given title and content.
func sendNotification(title, content string) {
conn, err := dbus.SessionBus()
if err != nil {
log.Printf("dbus: %v", err)
return
}
timeout := 3 * time.Second
obj := conn.Object("org.freedesktop.Notifications", "/org/freedesktop/Notifications")
call := obj.Call("org.freedesktop.Notifications.Notify", 0, "Tailscale", uint32(0),
appIcon.Name(), title, content, []string{}, map[string]dbus.Variant{}, int32(timeout.Milliseconds()))
if call.Err != nil {
log.Printf("dbus: %v", call.Err)
}
}
func onExit() {
log.Printf("exiting")
os.Remove(appIcon.Name())
new(systray.Menu).Run()
}

View File

@@ -0,0 +1,78 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
package cli
import (
"context"
"flag"
"fmt"
"strings"
"github.com/peterbourgon/ff/v3/ffcli"
"tailscale.com/envknob"
"tailscale.com/ipn"
"tailscale.com/tailcfg"
)
var advertiseArgs struct {
services string // comma-separated list of services to advertise
}
// TODO(naman): This flag may move to set.go or serve_v2.go once the WIPCode
// envknob is no longer needed.
var advertiseCmd = &ffcli.Command{
Name: "advertise",
ShortUsage: "tailscale advertise --services=<services>",
ShortHelp: "Advertise this node as a destination for a service",
Exec: runAdvertise,
FlagSet: (func() *flag.FlagSet {
fs := newFlagSet("advertise")
fs.StringVar(&advertiseArgs.services, "services", "", "comma-separated services to advertise; each must start with \"svc:\" (e.g. \"svc:idp,svc:nas,svc:database\")")
return fs
})(),
}
func maybeAdvertiseCmd() []*ffcli.Command {
if !envknob.UseWIPCode() {
return nil
}
return []*ffcli.Command{advertiseCmd}
}
func runAdvertise(ctx context.Context, args []string) error {
if len(args) > 0 {
return flag.ErrHelp
}
services, err := parseServiceNames(advertiseArgs.services)
if err != nil {
return err
}
_, err = localClient.EditPrefs(ctx, &ipn.MaskedPrefs{
AdvertiseServicesSet: true,
Prefs: ipn.Prefs{
AdvertiseServices: services,
},
})
return err
}
// parseServiceNames takes a comma-separated list of service names
// (eg. "svc:hello,svc:webserver,svc:catphotos"), splits them into
// a list and validates each service name. If valid, it returns
// the service names in a slice of strings.
func parseServiceNames(servicesArg string) ([]string, error) {
var services []string
if servicesArg != "" {
services = strings.Split(servicesArg, ",")
for _, svc := range services {
err := tailcfg.CheckServiceName(svc)
if err != nil {
return nil, fmt.Errorf("service %q: %s", svc, err)
}
}
}
return services, nil
}
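For context, a hedged sketch of what a caller sees from parseServiceNames; the inputs and the reason for rejection are illustrative, and the exact error text comes from tailcfg.CheckServiceName. Note the command is only registered when the WIP-code envknob is enabled, per maybeAdvertiseCmd above.

// Illustration only, not part of this diff.
func exampleParseServiceNames() {
	svcs, err := parseServiceNames("svc:idp,svc:nas")
	fmt.Println(svcs, err) // [svc:idp svc:nas] <nil>, assuming both names validate

	if _, err := parseServiceNames("idp"); err != nil {
		fmt.Println("rejected:", err) // assumed rejected: missing the required "svc:" prefix
	}
}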

View File

@@ -93,8 +93,13 @@ func Run(args []string) (err error) {
args = CleanUpArgs(args)
if len(args) == 1 && (args[0] == "-V" || args[0] == "--version") {
args = []string{"version"}
if len(args) == 1 {
switch args[0] {
case "-V", "--version":
args = []string{"version"}
case "help":
args = []string{"--help"}
}
}
var warnOnce sync.Once
@@ -177,7 +182,7 @@ For help on subcommands, add --help after: "tailscale status --help".
This CLI is still under active development. Commands and flags will
change in the future.
`),
Subcommands: []*ffcli.Command{
Subcommands: append([]*ffcli.Command{
upCmd,
downCmd,
setCmd,
@@ -185,10 +190,12 @@ change in the future.
logoutCmd,
switchCmd,
configureCmd,
syspolicyCmd,
netcheckCmd,
ipCmd,
dnsCmd,
statusCmd,
metricsCmd,
pingCmd,
ncCmd,
sshCmd,
@@ -207,7 +214,7 @@ change in the future.
debugCmd,
driveCmd,
idTokenCmd,
},
}, maybeAdvertiseCmd()...),
FlagSet: rootfs,
Exec: func(ctx context.Context, args []string) error {
if len(args) > 0 {

Some files were not shown because too many files have changed in this diff.