Compare commits

...

56 Commits

Author SHA1 Message Date
M. J. Fromberger
0efeb032e6 wgengine/magicsock: subscribe to portmapper updates
When an event bus is plumbed in, use it to subscribe and react to port mapping
updates instead of using the client's callback mechanism. For now, the callback
remains available as a fallback when an event bus is not provided.

Updates #15160

Change-Id: I026adca44bf6187692ee87ae8ec02641c12f7774
Signed-off-by: M. J. Fromberger <fromberger@tailscale.com>
2025-03-26 08:50:51 -07:00
David Anderson
d7e92345eb net/netmon: publish events to event bus
Updates #15160

Signed-off-by: David Anderson <dave@tailscale.com>
2025-03-26 08:50:51 -07:00
David Anderson
39449b12ec derp/derphttp: remove ban on websockets dependency
The event bus's debug page uses websockets.

Updates #15160

Signed-off-by: David Anderson <dave@tailscale.com>
2025-03-26 08:50:51 -07:00
David Anderson
f6547cd990 cmd/tailscaled: clean up unnecessary logf indirection #cleanup
Signed-off-by: David Anderson <dave@tailscale.com>
2025-03-26 08:50:51 -07:00
M. J. Fromberger
cf230a8362 net/portmapper: fire an event when a port mapping is updated (#15371)
When an event bus is configured publish an event each time a new port mapping
is updated. Publication is unconditional and occurs prior to calling any
callback that is registered. For now, the callback is still fired in a separate
goroutine as before -- later, those callbacks should become subscriptions to
the published event.

For now, the event type is defined as a new type here in the package. We will
want to move it to a more central package when there are subscribers. The event
wrapper is effectively a subset of the data exported by the internal mapping
interface, but on a concrete struct so the bus plumbing can inspect it.

Updates #15160

Change-Id: I951f212429ac791223af8d75b6eb39a0d2a0053a
Signed-off-by: M. J. Fromberger <fromberger@tailscale.com>
2025-03-26 08:50:51 -07:00
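A minimal sketch of the publish-then-callback pattern this commit describes, with hypothetical PortMappingEvent, Bus, and Client types standing in for the real portmapper and eventbus APIs (this is not the actual Tailscale code):

package main

import (
	"fmt"
	"sync"
)

// PortMappingEvent is a hypothetical stand-in for the concrete event struct:
// a subset of the internal mapping data, exported for subscribers.
type PortMappingEvent struct{ External string }

// Bus is a hypothetical stand-in for the event bus.
type Bus struct{ events chan PortMappingEvent }

func (b *Bus) Publish(e PortMappingEvent)         { b.events <- e }
func (b *Bus) Subscribe() <-chan PortMappingEvent { return b.events }

// Client sketches a port-mapping client with an optional bus and a legacy callback.
type Client struct {
	bus      *Bus   // may be nil when no bus was plumbed in
	onChange func() // legacy callback mechanism
	wg       sync.WaitGroup
}

// noteNewMapping publishes unconditionally (when a bus is configured) before
// firing the registered callback in a separate goroutine, as before.
func (c *Client) noteNewMapping(e PortMappingEvent) {
	if c.bus != nil {
		c.bus.Publish(e) // unconditional, prior to any callback
	}
	if cb := c.onChange; cb != nil {
		c.wg.Add(1)
		go func() { defer c.wg.Done(); cb() }() // callback still fired in its own goroutine
	}
}

func main() {
	bus := &Bus{events: make(chan PortMappingEvent, 1)}
	c := &Client{bus: bus, onChange: func() { fmt.Println("legacy callback fired") }}

	c.noteNewMapping(PortMappingEvent{External: "203.0.113.7:41641"})

	// A subscriber (for example magicsock, per the earlier commit) reacting
	// to the published event instead of relying on the callback.
	fmt.Println("subscriber saw new mapping:", (<-bus.Subscribe()).External)
	c.wg.Wait()
}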
M. J. Fromberger
039605e500 all: update the tsd.System constructor name (#15372)
Replace NewSystemWithEventBus with plain NewSystem, and update all usage.
See https://github.com/tailscale/tailscale/pull/15355#discussion_r2003910766

Updates #15160

Change-Id: I64d337f09576b41d9ad78eba301a74b9a9d6ebf4
Signed-off-by: M. J. Fromberger <fromberger@tailscale.com>
2025-03-26 08:50:51 -07:00
M. J. Fromberger
8d6ff1c66a {wgengine,util/portmapper}: add and plumb an event bus (#15359)
Updates #15160

Change-Id: I2510fb4a8905fb0abe8a8e0c5b81adb15d50a6f8
Signed-off-by: M. J. Fromberger <fromberger@tailscale.com>
2025-03-26 08:50:51 -07:00
M. J. Fromberger
c654c5bd7a portmapper: update NewClient to use a Config argument
In preparation for adding more parameters (and later, moving some away), rework
the portmapper constructor to accept its arguments on a Config struct rather
than positionally.

This is a breaking change to the function signature, but one that is very easy
to update, and a search of GitHub reveals only six instances of usage outside
clones and forks of Tailscale itself, that are not direct copies of the code
fixed up here.

While we could stub in another constructor, I think it is safe to let those
folks do the update in-place, since their usage is already affected by other
changes we can't test for anyway.

Updates #15160

Change-Id: I9f8a5e12b38885074c98894b7376039261b43f43
Signed-off-by: M. J. Fromberger <fromberger@tailscale.com>
2025-03-26 08:50:51 -07:00
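As a rough illustration of the constructor change described above (the Config field names below are hypothetical, not the actual portmapper.Config), moving from positional arguments to an options struct lets parameters be added or removed later without touching every call site's argument order:

package main

import "log"

type DebugKnobs struct{ DisableAll bool }

// Config is an illustrative options struct replacing positional constructor
// arguments; the fields here are assumptions for the sketch.
type Config struct {
	Logf       func(format string, args ...any)
	OnChange   func() // optional callback
	DebugKnobs *DebugKnobs
}

type Client struct{ cfg Config }

// NewClient accepts its arguments on a Config struct rather than positionally.
func NewClient(cfg Config) *Client {
	if cfg.Logf == nil {
		cfg.Logf = log.Printf
	}
	return &Client{cfg: cfg}
}

func main() {
	// Old style (positional): NewClient(logf, onChange, debugKnobs)
	// New style: named fields; optional ones are simply omitted.
	c := NewClient(Config{Logf: log.Printf})
	c.cfg.Logf("portmapper client configured (debug knobs set: %v)", c.cfg.DebugKnobs != nil)
}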
M. J. Fromberger
981d721d20 wgengine: plumb an event bus into the userspace engine
Updates #15160

Change-Id: Ia695ccdddd09cd950de22abd000d4c531d6bf3c8
Signed-off-by: M. J. Fromberger <fromberger@tailscale.com>
2025-03-26 08:50:51 -07:00
M. J. Fromberger
a612b2de5a tsnet: shut down the event bus on Close
Updates #15160

Change-Id: I29c8194b4b41e95848e5f160e9970db352588449
Signed-off-by: M. J. Fromberger <fromberger@tailscale.com>
2025-03-26 08:50:51 -07:00
M. J. Fromberger
f5ddc0d6c3 all: construct new System values with an event bus pre-populated
Although, at the moment, we do not yet require an event bus to be present, as
we start to add more pieces we will want to ensure it is always available.  Add
a new constructor and replace existing uses of new(tsd.System) throughout.
Update generated files for import changes.

Updates #15160

Change-Id: Ie5460985571ade87b8eac8b416948c7f49f0f64b
Signed-off-by: M. J. Fromberger <fromberger@tailscale.com>
2025-03-26 08:50:51 -07:00
David Anderson
cf85d4a3b1 tsd: wire up the event bus to tailscaled
Updates #15160

Signed-off-by: David Anderson <dave@tailscale.com>
2025-03-26 08:50:51 -07:00
David Anderson
b2b1737f86 tsweb: don't hook up pprof handlers in javascript builds
Updates #15160

Signed-off-by: David Anderson <dave@tailscale.com>
2025-03-26 08:50:51 -07:00
Irbe Krumina
fea74a60d5 cmd/k8s-operator,k8s-operator: disable HA Ingress before stable release (#15433)
Temporarily make sure that the HA Ingress reconciler does not run,
as we do not want to release this to stable just yet.

Updates tailscale/corp#24795

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2025-03-26 13:29:38 +00:00
Irbe Krumina
e3c04c5d6c build_docker.sh: bump default base image (#15432)
We now have a tailscale/alpine-base:3.19; use it as the default base image.

Updates tailscale/tailscale#15328

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2025-03-26 11:58:26 +00:00
James Tucker
d0e7af3830 cmd/natc: add test and fix for ip exhaustion
This is a very dumb fix as it has an unbounded worst case runtime. IP
allocation needs to be done in a more sane way in a follow-up.

Updates #15367

Signed-off-by: James Tucker <james@tailscale.com>
2025-03-25 19:16:02 -07:00
Irbe Krumina
2685484f26 Bump Alpine, link iptables back to legacy (#15428)
Bumps Alpine 3.18 -> 3.19.

Alpine 3.19 links iptables to an nftables-based
implementation that can break hosts that don't
support nftables.
Link iptables back to the legacy implementation
until we have some certainty that changing to the
nftables-based implementation will not break existing
setups.

Updates tailscale/tailscale#15328

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2025-03-26 01:48:01 +00:00
Irbe Krumina
a622debe9b cmd/{k8s-operator,containerboot}: check TLS cert before advertising VIPService (#15427)

- Ensures that Ingress status does not advertise port 443 before
TLS cert has been issued
- Ensure that Ingress backends do not advertise a VIPService
before TLS cert has been issued, unless the service also
exposes port 80

Updates tailscale/corp#24795

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2025-03-26 01:32:13 +00:00
Irbe Krumina
4777cc2cda ipn/store/kubestore: skip cache for the write replica in cert share mode (#15417)

This is to avoid issues where stale cache after Ingress recreation
causes the certs not to be re-issued.

Updates tailscale/corp#24795

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2025-03-25 23:25:29 +00:00
James Nugent
75373896c7 tsnet: Default executable name on iOS
When compiled into TailscaleKit.framework (via the libtailscale
repository), os.Executable() returns an error instead of the name of the
executable. This commit adds another branch to the switch statement that
enumerates platforms which behave in this manner, and defaults to
"tsnet" in the same manner as those other platforms.

Fixes #15410.

Signed-off-by: James Nugent <james@jen20.com>
2025-03-25 15:28:35 -07:00
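A small sketch of the fallback behaviour described here, assuming (as the commit does) that os.Executable fails on certain platforms such as iOS; this is illustrative, not the actual tsnet switch statement:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"runtime"
)

// programName falls back to "tsnet" on platforms where os.Executable is known
// to fail, and whenever it returns an error.
func programName() string {
	switch runtime.GOOS {
	case "ios":
		return "tsnet"
	}
	exe, err := os.Executable()
	if err != nil {
		return "tsnet"
	}
	return filepath.Base(exe)
}

func main() {
	fmt.Println("derived program name:", programName())
}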
Brad Fitzpatrick
5aa1c27aad control/controlhttp: quiet "forcing port 443" log spam
Minimal mitigation that doesn't do the full refactor that's probably
warranted.

Updates #15402

Change-Id: I79fd91de0e0661d25398f7d95563982ed1d11561
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2025-03-25 14:26:24 -07:00
Jonathan Nobels
725c8d298a ipn/ipnlocal: remove misleading [unexpected] log for auditlog (#15421)
Fixes tailscale/tailscale#15394

In the current iteration, usage of the memstore for the audit
logger is expected on some platforms.

Signed-off-by: Jonathan Nobels <jonathan@tailscale.com>
2025-03-25 15:05:50 -04:00
Mike O'Driscoll
08c8ccb48e prober: add address family label for udp metrics (#15413)
Add a label which differentiates the address family
for STUN checks.

Also initialize the derpprobe_attempts_total and
derpprobe_seconds_total metrics by adding 0 for
the alternate fail/ok case.

Updates tailscale/corp#27249

Signed-off-by: Mike O'Driscoll <mikeo@tailscale.com>
2025-03-25 12:49:54 -04:00
Percy Wegmann
e78055eb01 ipn/ipnlocal: add more logging for initializing peerAPIListeners
On Windows and Android, peerAPIListeners may be initialized after a link change.
This commit adds log statements to make it easier to trace this flow.

Updates #14393

Signed-off-by: Percy Wegmann <percy@tailscale.com>
2025-03-25 06:56:50 -05:00
James Sanderson
ea79dc161d tstest/integration/testcontrol: fix AddRawMapResponse race condition
Only send a stored raw map message in reply to a streaming map response.
Otherwise a non-streaming map response might pick it up first, and
potentially drop it. This guarantees that a map response sent via
AddRawMapResponse will be picked up by the main map response loop in the
client.

Fixes #15362

Signed-off-by: James Sanderson <jsanderson@tailscale.com>
2025-03-25 10:39:54 +00:00
James Tucker
b3455fa99a cmd/natc: add some initial unit test coverage
These tests aren't perfect, nor is this complete coverage, but this is a
set of coverage that is at least stable.

Updates #15367

Signed-off-by: James Tucker <james@tailscale.com>
2025-03-24 15:08:28 -07:00
Brad Fitzpatrick
14db99241f net/netmon: use Monitor's tsIfName if set by SetTailscaleInterfaceName
Currently nobody calls SetTailscaleInterfaceName yet, so this is a
no-op. I checked oss, android, and the macOS/iOS client. Nobody calls
this, or ever did.

But I want to in the future.

Updates #15408
Updates #9040

Change-Id: I05dfabe505174f9067b929e91c6e0d8bc42628d7
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2025-03-24 13:34:02 -07:00
Brad Fitzpatrick
156cd53e77 net/netmon: unexport GetState
Baby step towards #15408.

Updates #15408

Change-Id: I11fca6e677af2ad2f065d83aa0d83550143bff29
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2025-03-24 10:43:15 -07:00
Brad Fitzpatrick
5c0e08fbbd tstest/mts: add multiple-tailscaled development tool
To let you easily run multiple tailscaled instances for development
and let you route CLI commands to the right one.

Updates #15145

Change-Id: I06b6a7bf024f341c204f30705b4c3068ac89b1a2
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2025-03-24 10:10:35 -07:00
Brad Fitzpatrick
d0c50c6072 clientupdate: cache CanAutoUpdate, avoid log spam when false
I noticed logs on one of my machines where it can't auto-update with
scary log spam about "failed to apply tailnet-wide default for
auto-updates".

This avoids trying to do the EditPrefs if we know it's just going to
fail anyway.

Updates #282

Change-Id: Ib7db3b122185faa70efe08b60ebd05a6094eed8c
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2025-03-24 09:46:48 -07:00
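The diff further down shows this done with Tailscale's lazy.SyncValue; the sketch below illustrates the same caching idea with the standard library's sync.OnceValue, with a stand-in platform check (assumptions, not the real clientupdate code):

package main

import (
	"fmt"
	"sync"
)

// canAutoUpdateUncached stands in for the real platform check; assume it is
// expensive or noisy, so it should run at most once per process.
func canAutoUpdateUncached() bool {
	fmt.Println("performing platform check (runs once)")
	return false
}

// CanAutoUpdate caches the first result for the lifetime of the process, so
// repeated callers don't redo the check or re-trigger any logging it does.
var CanAutoUpdate = sync.OnceValue(canAutoUpdateUncached)

func main() {
	for i := 0; i < 3; i++ {
		fmt.Println("can auto-update:", CanAutoUpdate())
	}
}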
Simon Law
6bbf98bef4 all: skip looking for package comments in .git/ repository (#15384)
2025-03-21 14:46:02 -07:00
Brad Fitzpatrick
e1078686b3 safesocket: respect context timeout when sleeping for 250ms in retry loop
Noticed while working on a dev tool that uses local.Client.

Updates #cleanup

Change-Id: I981efff74a5cac5f515755913668bd0508a4aa14
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2025-03-21 10:55:32 -07:00
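A minimal sketch of the retry-loop change described above: instead of an unconditional 250ms sleep between attempts, select on the context so a short deadline is not overshot. The dialWithRetry helper and its parameters are hypothetical, not the actual safesocket code:

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// dialWithRetry retries dial until it succeeds or the context is done,
// sleeping 250ms between attempts without ignoring the context deadline.
func dialWithRetry(ctx context.Context, dial func() error) error {
	for {
		err := dial()
		if err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("dial: %w (last attempt: %v)", ctx.Err(), err)
		case <-time.After(250 * time.Millisecond):
			// retry
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 600*time.Millisecond)
	defer cancel()
	err := dialWithRetry(ctx, func() error { return errors.New("socket not ready") })
	fmt.Println(err)
}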
James Sanderson
c261fb198f tstest: make it clearer where AwaitRunning failed and why
Signed-off-by: James Sanderson <jsanderson@tailscale.com>
2025-03-21 13:09:46 +00:00
James Sanderson
5668de272c tsnet: use test logger for testcontrol and node logs
Updates #cleanup

Signed-off-by: James Sanderson <jsanderson@tailscale.com>
2025-03-21 12:33:36 +00:00
Tom Proctor
005e20a45e cmd/k8s-operator,internal/client/tailscale: use VIPService annotations for ownership tracking (#15356)
Switch from using the Comment field to a ts-scoped annotation for
tracking which operators are cooperating over ownership of a
VIPService.

Updates tailscale/corp#24795

Change-Id: I72d4a48685f85c0329aa068dc01a1a3c749017bf
Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
2025-03-21 09:08:39 +00:00
Irbe Krumina
196ae1cd74 cmd/k8s-operator,k8s-operator: allow optionally using LE staging endpoint for Ingress (#15360)
cmd/k8s-operator,k8s-operator: allow using LE staging endpoint for Ingress

Allow optionally using the LetsEncrypt staging endpoint to issue
certs for Ingress/HA Ingress, so that it is easier to
experiment with initial Ingress setup without hitting rate limits.

Updates tailscale/corp#24795


Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2025-03-21 08:53:41 +00:00
Nick Khyl
f3f2f72f96 ipn/ipnlocal: do not attempt to start the auditlogger with a nil transport
(*LocalBackend).setControlClientLocked() is called to both set and reset b.cc.
We shouldn't attempt to start the audit logger when b.cc is being reset (i.e., cc is nil).

However, it's fine to start the audit logger if b.cc implements auditlog.Transport, even if it's not a controlclient.Auto but a mock control client.

In this PR, we fix both issues and add an assertion that controlclient.Auto is an auditlog.Transport. This ensures a compile-time failure if controlclient.Auto ever stops being a valid transport due to future interface or implementation changes.

Updates tailscale/corp#26435

Signed-off-by: Nick Khyl <nickk@tailscale.com>
2025-03-20 15:56:54 -05:00
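The compile-time assertion mentioned above uses a standard Go idiom; the sketch below shows it with hypothetical Transport and Auto types standing in for auditlog.Transport and controlclient.Auto:

package main

import "fmt"

// Transport is a hypothetical stand-in for auditlog.Transport.
type Transport interface {
	SendAuditLog(action string) error
}

// Auto is a hypothetical stand-in for controlclient.Auto.
type Auto struct{}

func (a *Auto) SendAuditLog(action string) error {
	fmt.Println("sending audit log:", action)
	return nil
}

// Compile-time assertion: if *Auto ever stops satisfying Transport, the build
// fails here instead of audit logging silently breaking at runtime.
var _ Transport = (*Auto)(nil)

func main() {
	var cc any = &Auto{} // cc could also be nil (backend reset) or a mock in tests

	// Only start the audit logger when the control client actually provides a
	// transport; a nil or non-Transport cc simply skips it.
	if t, ok := cc.(Transport); ok {
		t.SendAuditLog("node-approved")
	}
}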
Nick Khyl
e07c1573f6 ipn/ipnlocal: do not reset the netmap and packet filter in (*LocalBackend).Start()
Resetting LocalBackend's netmap without also unconfiguring wgengine to reset routes, DNS, and the killswitch
firewall rules may cause connectivity issues until a new netmap is received.

In some cases, such as when bootstrap DNS servers are inaccessible due to network restrictions or other reasons,
or if the control plane is experiencing issues, this can result in a complete loss of connectivity until the user disconnects
and reconnects to Tailscale.

As LocalBackend handles state resets in (*LocalBackend).resetForProfileChangeLockedOnEntry(), and this includes
resetting the netmap, resetting the current netmap in (*LocalBackend).Start() is not necessary.
Moreover, it's harmful if (*LocalBackend).Start() is called more than once for the same profile.

In this PR, we update resetForProfileChangeLockedOnEntry() to reset the packet filter and remove
the redundant resetting of the netmap and packet filter from Start(). We also update the state machine
tests and revise comments that became inaccurate due to previous test updates.

Updates tailscale/corp#27173

Signed-off-by: Nick Khyl <nickk@tailscale.com>
2025-03-20 13:18:23 -05:00
Brad Fitzpatrick
984cd1cab0 cmd/tailscale: add CLI debug command to do raw LocalAPI requests
This adds a portable way to do a raw LocalAPI request without worrying
about the Unix-vs-macOS-vs-Windows ways of hitting the LocalAPI server.
(It was already possible but tedious with 'tailscale debug local-creds')

Updates tailscale/corp#24690

Change-Id: I0828ca55edaedf0565c8db192c10f24bebb95f1b
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2025-03-20 10:07:11 -07:00
Irbe Krumina
f34e08e186 ipn: ensure that conffile is source of truth for advertised services. (#15361)
If conffile is used to configure tailscaled, always update
currently advertised services from conffile, even if they
are empty in the conffile, to ensure that it is possible
to transition to a state where no services are advertised.

Updates tailscale/corp#24795

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2025-03-20 14:40:36 +00:00
klyubin
3a2c92f08e web: support Host 100.100.100.100:80 in tailscaled web server
This makes the web server running inside tailscaled on 100.100.100.100:80 support requests with `Host: 100.100.100.100:80` and its IPv6 equivalent.

Prior to this commit, the web server replied to such requests with a redirect to the node's Tailscale IP:5252.

Fixes https://github.com/tailscale/tailscale/issues/14415

Signed-off-by: Alex Klyubin <klyubin@gmail.com>
2025-03-19 16:46:32 +00:00
Tom Proctor
8d84720edb cmd/k8s-operator: update ProxyGroup config Secrets instead of patch (#15353)
There was a flaky failure case where renaming a TLS hostname for an
ingress might leave the old hostname dangling in tailscaled config. This
happened when the proxygroup reconciler loop had an outdated resource
version of the config Secret in its cache after the
ingress-pg-reconciler loop had very recently written it to delete the
old hostname. As the proxygroup reconciler then did a patch, there was
no conflict and it reinstated the old hostname.

This commit updates the patch to an update operation so that if the
resource version is out of date it will fail with an optimistic lock
error. It also checks for equality to reduce the likelihood that we make
the update API call in the first place, because most of the time the
proxygroup reconciler is not even making an update to the Secret in the
case that the hostname has changed.

Updates tailscale/corp#24795

Change-Id: Ie23a97440063976c9a8475d24ab18253e1f89050
Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
2025-03-19 13:49:36 +00:00
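A hedged controller-runtime-style sketch of the approach described above: compare before writing, then use Update rather than Patch so a stale cached resourceVersion fails with an optimistic-lock conflict. The updateConfigSecret function and the Secret layout are illustrative assumptions, not the operator's actual code:

package operator

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	apiequality "k8s.io/apimachinery/pkg/api/equality"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// updateConfigSecret skips the write when nothing has changed, and uses
// Update so a stale cached copy cannot silently reinstate old data.
func updateConfigSecret(ctx context.Context, c client.Client, existing *corev1.Secret, wantData map[string][]byte) error {
	if apiequality.Semantic.DeepEqual(existing.Data, wantData) {
		return nil // avoids the API call in the common no-op case
	}
	updated := existing.DeepCopy()
	updated.Data = wantData
	if err := c.Update(ctx, updated); err != nil {
		if apierrors.IsConflict(err) {
			// Stale cached resourceVersion: an optimistic-lock conflict. Let
			// the reconciler requeue and retry with a fresh read rather than
			// clobbering a newer write, which a blind Patch would have done.
			return fmt.Errorf("config Secret changed underneath us, will retry: %w", err)
		}
		return err
	}
	return nil
}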
Jonathan Nobels
25d5f78c6e net/dns: expose a function for recompiling the DNS configuration (#15346)
Updates tailscale/corp#27145

We require a means to trigger a recompilation of the DNS configuration
to pick up new nameservers for platforms where we blend the interface
nameservers from the OS into our DNS config.

Notably, on Darwin, the only API we have at our disposal will, in rare instances,
return a transient error when querying the interface nameservers on a link change if
they have not been set when we get the AF_ROUTE messages for the link
update.

There's a corresponding change in corp for Darwin clients, to track
the interface nameservers during NEPathMonitor events, and call this
when the nameservers change.

This will also fix the slightly more obscure bug of changing nameservers
while tailscaled is running. That change can now be reflected in
magicDNS without having to stop the client.

Signed-off-by: Jonathan Nobels <jonathan@tailscale.com>
2025-03-19 09:21:37 -04:00
Irbe Krumina
f50d3b22db cmd/k8s-operator: configure proxies for HA Ingress to run in cert share mode (#15308)
cmd/k8s-operator: configure HA Ingress replicas to share certs

Creates TLS certs Secret and RBAC that allows HA Ingress replicas
to read/write to the Secret.
Configures HA Ingress replicas to run in read-only mode.

Updates tailscale/corp#24795


Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2025-03-19 12:49:31 +00:00
Tom Proctor
b0095a5da4 cmd/k8s-operator: wait for VIPService before updating HA Ingress status (#15343)
Update the HA Ingress controller to wait until it sees AdvertisedServices
config propagated into at least 1 Pod's prefs before it updates the status
on the Ingress, to ensure the ProxyGroup Pods are ready to serve traffic
before indicating that the Ingress is ready

Updates tailscale/corp#24795

Change-Id: I1b8ce23c9e312d08f9d02e48d70bdebd9e1a4757

Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
2025-03-19 08:53:15 +00:00
David Anderson
e091e71937 util/eventbus: remove debug UI from iOS build
The use of html/template causes reflect-based linker bloat. Longer
term we have options to bring the UI back to iOS, but for now, cut
it out.

Updates #15297

Signed-off-by: David Anderson <dave@tailscale.com>
2025-03-18 17:04:15 -07:00
David Anderson
daa5635ba6 tsweb: split promvarz into an optional dependency
Allows the use of tsweb without pulling in all of the heavy prometheus
client libraries, protobuf and so on.

Updates #15160

Signed-off-by: David Anderson <dave@tailscale.com>
2025-03-18 16:57:04 -07:00
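The later depaware/import diffs show this done via blank imports such as _ "tailscale.com/tsweb/promvarz". The sketch below illustrates the underlying hook-and-register pattern in one file, with hypothetical names (in the real split the registration init() lives in the optional package that binaries import for side effects):

package main

import (
	"fmt"
	"net/http"
)

// varzHandler is an optional hook: nil unless an optional feature package
// registers itself. Everything here is an illustrative stand-in, not the
// actual tsweb API.
var varzHandler http.Handler

// registerPromVarz plays the role of the optional package's init().
func registerPromVarz() {
	varzHandler = http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "# metrics would be served here")
	})
}

func handler() http.Handler {
	if varzHandler != nil {
		return varzHandler // feature compiled in via the blank import
	}
	return http.NotFoundHandler() // feature left out: no prometheus deps linked
}

func main() {
	fmt.Printf("without registration: %T\n", handler())
	registerPromVarz()
	fmt.Printf("with registration:    %T\n", handler())
}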
Anton Tolchanov
74ee749386 client/tailscale: add tailnet lock fields to Device struct
These are documented, but have not yet been defined in the client.
https://tailscale.com/api#tag/devices/GET/device/{deviceId}

Updates tailscale/corp#27050

Signed-off-by: Anton Tolchanov <anton@tailscale.com>
2025-03-18 17:03:19 +00:00
Irbe Krumina
34734ba635 ipn/store/kubestore,kube,envknob,cmd/tailscaled/depaware.txt: allow kubestore read/write custom TLS secrets (#15307)
This PR adds some custom logic for reading and writing
kube store values that are TLS certs and keys:
1) when store is initialized, lookup additional
TLS Secrets for this node and if found, load TLS certs
from there
2) if the node runs in certs 'read only' mode and
TLS cert and key are not found in the in-memory store,
look those up in a Secret
3) if the node runs in certs 'read only' mode, run
a daily TLS certs reload to memory to get any
renewed certs

Updates tailscale/corp#24795

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2025-03-18 15:09:22 +00:00
Tom Proctor
ef1e14250c cmd/k8s-operator: ensure old VIPServices are cleaned up (#15344)
When the Ingress is updated to a new hostname, the controller does not
currently clean up the old VIPService from control. Fix this up to parse
the ownership comment correctly and write a test to enforce the improved
behaviour

Updates tailscale/corp#24795

Change-Id: I792ae7684807d254bf2d3cc7aa54aa04a582d1f5

Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
2025-03-18 12:48:59 +00:00
Anton Tolchanov
b413b70ae2 cmd/proxy-to-grafana: support setting Grafana role via grants
This adds support for using ACL Grants to configure a role for the
auto-provisioned user.

Fixes tailscale/corp#14567

Signed-off-by: Anton Tolchanov <anton@tailscale.com>
2025-03-18 07:26:04 +00:00
License Updater
25b059c0ee licenses: update license notices
Signed-off-by: License Updater <noreply+license-updater@tailscale.com>
2025-03-17 12:50:16 -07:00
James Sanderson
27ef9b666c ipn/ipnlocal: add test for CapMap packet filters
Updates tailscale/corp#20514

Signed-off-by: James Sanderson <jsanderson@tailscale.com>
2025-03-17 11:24:54 +00:00
Andrew Lytvynov
3a4b622276 .github/workflows/govulncheck.yml: send messages to another channel (#15295)
Updates #cleanup

Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
2025-03-14 12:30:29 -07:00
Irbe Krumina
299c5372bd cmd/containerboot: manage HA Ingress TLS certs from containerboot (#15303)

When run as an HA Ingress node, containerboot can now determine
whether it should manage TLS certs for the HA Ingress replicas
and call the LocalAPI cert endpoint to ensure initial issuance
and renewal of the shared TLS certs.

Updates tailscale/corp#24795

Signed-off-by: Irbe Krumina <irbe@tailscale.com>
2025-03-14 17:33:08 +00:00
Jordan Whited
8b1e7f646e net/packet: implement Geneve header serialization (#15301)
Updates tailscale/corp#27100

Signed-off-by: Jordan Whited <jordan@tailscale.com>
2025-03-13 13:33:26 -07:00
141 changed files with 4780 additions and 983 deletions

View File

@@ -30,7 +30,7 @@ jobs:
token: ${{ secrets.GOVULNCHECK_BOT_TOKEN }}
payload: |
{
"channel": "C05PXRM304B",
"channel": "C08FGKZCQTW",
"blocks": [
{
"type": "section",

View File

@@ -1 +1 @@
3.18
3.19

View File

@@ -62,8 +62,10 @@ RUN GOARCH=$TARGETARCH go install -ldflags="\
-X tailscale.com/version.gitCommitStamp=$VERSION_GIT_HASH" \
-v ./cmd/tailscale ./cmd/tailscaled ./cmd/containerboot
FROM alpine:3.18
FROM alpine:3.19
RUN apk add --no-cache ca-certificates iptables iproute2 ip6tables
RUN rm /sbin/iptables && ln -s /sbin/iptables-legacy /sbin/iptables
RUN rm /sbin/ip6tables && ln -s /sbin/ip6tables-legacy /sbin/ip6tables
COPY --from=build-env /go/bin/* /usr/local/bin/
# For compat with the previous run.sh, although ideally you should be

View File

@@ -1,5 +1,12 @@
# Copyright (c) Tailscale Inc & AUTHORS
# SPDX-License-Identifier: BSD-3-Clause
FROM alpine:3.18
RUN apk add --no-cache ca-certificates iptables iproute2 ip6tables iputils
FROM alpine:3.19
RUN apk add --no-cache ca-certificates iptables iptables-legacy iproute2 ip6tables iputils
# Alpine 3.19 replaces legacy iptables with nftables based implementation. We
# can't be certain that all hosts that run Tailscale containers currently
# support nftables, so link back to legacy for backwards compatibility reasons.
# TODO(irbekrm): add a way to determine if we still run on nodes that
# don't support nftables, so that we can eventually remove these symlinks.
RUN rm /sbin/iptables && ln -s /sbin/iptables-legacy /sbin/iptables
RUN rm /sbin/ip6tables && ln -s /sbin/ip6tables-legacy /sbin/ip6tables

View File

@@ -16,7 +16,7 @@ eval "$(./build_dist.sh shellvars)"
DEFAULT_TARGET="client"
DEFAULT_TAGS="v${VERSION_SHORT},v${VERSION_MINOR}"
DEFAULT_BASE="tailscale/alpine-base:3.18"
DEFAULT_BASE="tailscale/alpine-base:3.19"
# Set a few pre-defined OCI annotations. The source annotation is used by tools such as Renovate that scan the linked
# Github repo to find release notes for any new image tags. Note that for official Tailscale images the default
# annotations defined here will be overridden by release scripts that call this script.

View File

@@ -79,6 +79,13 @@ type Device struct {
// Tailscale have attempted to collect this from the device but it has not
// opted in, PostureIdentity will have Disabled=true.
PostureIdentity *DevicePostureIdentity `json:"postureIdentity"`
// TailnetLockKey is the tailnet lock public key of the node as a hex string.
TailnetLockKey string `json:"tailnetLockKey,omitempty"`
// TailnetLockErr indicates an issue with the tailnet lock node-key signature
// on this device. This field is only populated when tailnet lock is enabled.
TailnetLockErr string `json:"tailnetLockError,omitempty"`
}
type DevicePostureIdentity struct {

View File

@@ -335,7 +335,8 @@ func (s *Server) requireTailscaleIP(w http.ResponseWriter, r *http.Request) (han
ipv6ServiceHost = "[" + tsaddr.TailscaleServiceIPv6String + "]"
)
// allow requests on quad-100 (or ipv6 equivalent)
if r.Host == ipv4ServiceHost || r.Host == ipv6ServiceHost {
host := strings.TrimSuffix(r.Host, ":80")
if host == ipv4ServiceHost || host == ipv6ServiceHost {
return false
}

View File

@@ -1177,6 +1177,16 @@ func TestRequireTailscaleIP(t *testing.T) {
target: "http://[fd7a:115c:a1e0::53]/",
wantHandled: false,
},
{
name: "quad-100:80",
target: "http://100.100.100.100:80/",
wantHandled: false,
},
{
name: "ipv6-service-addr:80",
target: "http://[fd7a:115c:a1e0::53]:80/",
wantHandled: false,
},
}
for _, tt := range tests {

View File

@@ -28,6 +28,7 @@ import (
"strings"
"tailscale.com/hostinfo"
"tailscale.com/types/lazy"
"tailscale.com/types/logger"
"tailscale.com/util/cmpver"
"tailscale.com/version"
@@ -249,9 +250,13 @@ func (up *Updater) getUpdateFunction() (fn updateFunction, canAutoUpdate bool) {
return nil, false
}
var canAutoUpdateCache lazy.SyncValue[bool]
// CanAutoUpdate reports whether auto-updating via the clientupdate package
// is supported for the current os/distro.
func CanAutoUpdate() bool {
func CanAutoUpdate() bool { return canAutoUpdateCache.Get(canAutoUpdateUncached) }
func canAutoUpdateUncached() bool {
if version.IsMacSysExt() {
// Macsys uses Sparkle for auto-updates, which doesn't have an update
// function in this package.

cmd/containerboot/certs.go (new file, 156 lines)
View File

@@ -0,0 +1,156 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build linux
package main
import (
"context"
"fmt"
"log"
"net"
"sync"
"time"
"tailscale.com/ipn"
"tailscale.com/util/goroutines"
"tailscale.com/util/mak"
)
// certManager is responsible for issuing certificates for known domains and for
// maintaining a loop that re-attempts issuance daily.
// Currently cert manager logic is only run on ingress ProxyGroup replicas that are responsible for managing certs for
// HA Ingress HTTPS endpoints ('write' replicas).
type certManager struct {
lc localClient
tracker goroutines.Tracker // tracks running goroutines
mu sync.Mutex // guards the following
// certLoops is a map from each DNS name for which we currently need to
// manage certs to a cancel function that stops the cert loop goroutine
// when we no longer need to manage certs for that DNS name.
certLoops map[string]context.CancelFunc
}
// ensureCertLoops ensures that, for all currently managed Service HTTPS
// endpoints, there is a cert loop responsible for issuing and ensuring the
// renewal of the TLS certs.
// ServeConfig must not be nil.
func (cm *certManager) ensureCertLoops(ctx context.Context, sc *ipn.ServeConfig) error {
if sc == nil {
return fmt.Errorf("[unexpected] ensureCertLoops called with nil ServeConfig")
}
currentDomains := make(map[string]bool)
const httpsPort = "443"
for _, service := range sc.Services {
for hostPort := range service.Web {
domain, port, err := net.SplitHostPort(string(hostPort))
if err != nil {
return fmt.Errorf("[unexpected] unable to parse HostPort %s", hostPort)
}
if port != httpsPort { // HA Ingress' HTTP endpoint
continue
}
currentDomains[domain] = true
}
}
cm.mu.Lock()
defer cm.mu.Unlock()
for domain := range currentDomains {
if _, exists := cm.certLoops[domain]; !exists {
cancelCtx, cancel := context.WithCancel(ctx)
mak.Set(&cm.certLoops, domain, cancel)
// Note that most of the issuance anyway happens
// serially because the cert client has a shared lock
// that's held during any issuance.
cm.tracker.Go(func() { cm.runCertLoop(cancelCtx, domain) })
}
}
// Stop goroutines for domain names that are no longer in the config.
for domain, cancel := range cm.certLoops {
if !currentDomains[domain] {
cancel()
delete(cm.certLoops, domain)
}
}
return nil
}
// runCertLoop:
// - calls localAPI certificate endpoint to ensure that certs are issued for the
// given domain name
// - calls localAPI certificate endpoint daily to ensure that certs are renewed
// - if certificate issuance fails, retries after an exponential backoff period
// starting at 1 minute and capped at 24 hours. The backoff is reset once issuance succeeds.
// Note that a renewal check also happens when the node receives an HTTPS request, and it is possible that certs get
// renewed at that point. Renewal here is needed to prevent the shared certs from expiring in edge cases where the 'write'
// replica does not get any HTTPS requests.
// https://letsencrypt.org/docs/integration-guide/#retrying-failures
func (cm *certManager) runCertLoop(ctx context.Context, domain string) {
const (
normalInterval = 24 * time.Hour // regular renewal check
initialRetry = 1 * time.Minute // initial backoff after a failure
maxRetryInterval = 24 * time.Hour // max backoff period
)
timer := time.NewTimer(0) // fire off timer immediately
defer timer.Stop()
retryCount := 0
for {
select {
case <-ctx.Done():
return
case <-timer.C:
// We call the certificate endpoint, but don't do anything
// with the returned certs here.
// The call to the certificate endpoint will ensure that
// certs are issued/renewed as needed and stored in the
// relevant state store. For example, for HA Ingress
// 'write' replica, the cert and key will be stored in a
// Kubernetes Secret named after the domain for which we
// are issuing.
// Note that renewals triggered by the call to the
// certificates endpoint here and by renewal check
// triggered during a call to node's HTTPS endpoint
// share the same state/renewal lock mechanism, so we
// should not run into redundant issuances during
// concurrent renewal checks.
// TODO(irbekrm): maybe it is worth adding a new
// issuance endpoint that explicitly only triggers
// issuance and stores certs in the relevant store, but
// does not return certs to the caller?
// An issuance holds a shared lock, so we need to avoid
// a situation where other services cannot issue certs
// because a single one is holding the lock.
ctxT, cancel := context.WithTimeout(ctx, time.Second*300)
defer cancel()
_, _, err := cm.lc.CertPair(ctxT, domain)
if err != nil {
log.Printf("error refreshing certificate for %s: %v", domain, err)
}
var nextInterval time.Duration
// TODO(irbekrm): distinguish between LE rate limit
// errors and other error types like transient network
// errors.
if err == nil {
retryCount = 0
nextInterval = normalInterval
} else {
retryCount++
// Calculate backoff: initialRetry * 2^(retryCount-1)
// For retryCount=1: 1min * 2^0 = 1min
// For retryCount=2: 1min * 2^1 = 2min
// For retryCount=3: 1min * 2^2 = 4min
backoff := initialRetry * time.Duration(1<<(retryCount-1))
if backoff > maxRetryInterval {
backoff = maxRetryInterval
}
nextInterval = backoff
log.Printf("Error refreshing certificate for %s (retry %d): %v. Will retry in %v\n",
domain, retryCount, err, nextInterval)
}
timer.Reset(nextInterval)
}
}
}

View File

@@ -0,0 +1,229 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
//go:build linux
package main
import (
"context"
"testing"
"time"
"tailscale.com/ipn"
"tailscale.com/tailcfg"
)
// TestEnsureCertLoops tests that the certManager correctly starts and stops
// update loops for certs when the serve config changes. It tracks goroutine
// count and uses that as a validator that the expected number of cert loops are
// running.
func TestEnsureCertLoops(t *testing.T) {
tests := []struct {
name string
initialConfig *ipn.ServeConfig
updatedConfig *ipn.ServeConfig
initialGoroutines int64 // after initial serve config is applied
updatedGoroutines int64 // after updated serve config is applied
wantErr bool
}{
{
name: "empty_serve_config",
initialConfig: &ipn.ServeConfig{},
initialGoroutines: 0,
},
{
name: "nil_serve_config",
initialConfig: nil,
initialGoroutines: 0,
wantErr: true,
},
{
name: "empty_to_one_service",
initialConfig: &ipn.ServeConfig{},
updatedConfig: &ipn.ServeConfig{
Services: map[tailcfg.ServiceName]*ipn.ServiceConfig{
"svc:my-app": {
Web: map[ipn.HostPort]*ipn.WebServerConfig{
"my-app.tailnetxyz.ts.net:443": {},
},
},
},
},
initialGoroutines: 0,
updatedGoroutines: 1,
},
{
name: "single_service",
initialConfig: &ipn.ServeConfig{
Services: map[tailcfg.ServiceName]*ipn.ServiceConfig{
"svc:my-app": {
Web: map[ipn.HostPort]*ipn.WebServerConfig{
"my-app.tailnetxyz.ts.net:443": {},
},
},
},
},
initialGoroutines: 1,
},
{
name: "multiple_services",
initialConfig: &ipn.ServeConfig{
Services: map[tailcfg.ServiceName]*ipn.ServiceConfig{
"svc:my-app": {
Web: map[ipn.HostPort]*ipn.WebServerConfig{
"my-app.tailnetxyz.ts.net:443": {},
},
},
"svc:my-other-app": {
Web: map[ipn.HostPort]*ipn.WebServerConfig{
"my-other-app.tailnetxyz.ts.net:443": {},
},
},
},
},
initialGoroutines: 2, // one loop per domain across all services
},
{
name: "ignore_non_https_ports",
initialConfig: &ipn.ServeConfig{
Services: map[tailcfg.ServiceName]*ipn.ServiceConfig{
"svc:my-app": {
Web: map[ipn.HostPort]*ipn.WebServerConfig{
"my-app.tailnetxyz.ts.net:443": {},
"my-app.tailnetxyz.ts.net:80": {},
},
},
},
},
initialGoroutines: 1, // only one loop for the 443 endpoint
},
{
name: "remove_domain",
initialConfig: &ipn.ServeConfig{
Services: map[tailcfg.ServiceName]*ipn.ServiceConfig{
"svc:my-app": {
Web: map[ipn.HostPort]*ipn.WebServerConfig{
"my-app.tailnetxyz.ts.net:443": {},
},
},
"svc:my-other-app": {
Web: map[ipn.HostPort]*ipn.WebServerConfig{
"my-other-app.tailnetxyz.ts.net:443": {},
},
},
},
},
updatedConfig: &ipn.ServeConfig{
Services: map[tailcfg.ServiceName]*ipn.ServiceConfig{
"svc:my-app": {
Web: map[ipn.HostPort]*ipn.WebServerConfig{
"my-app.tailnetxyz.ts.net:443": {},
},
},
},
},
initialGoroutines: 2, // initially two loops (one per service)
updatedGoroutines: 1, // one loop after removing service2
},
{
name: "add_domain",
initialConfig: &ipn.ServeConfig{
Services: map[tailcfg.ServiceName]*ipn.ServiceConfig{
"svc:my-app": {
Web: map[ipn.HostPort]*ipn.WebServerConfig{
"my-app.tailnetxyz.ts.net:443": {},
},
},
},
},
updatedConfig: &ipn.ServeConfig{
Services: map[tailcfg.ServiceName]*ipn.ServiceConfig{
"svc:my-app": {
Web: map[ipn.HostPort]*ipn.WebServerConfig{
"my-app.tailnetxyz.ts.net:443": {},
},
},
"svc:my-other-app": {
Web: map[ipn.HostPort]*ipn.WebServerConfig{
"my-other-app.tailnetxyz.ts.net:443": {},
},
},
},
},
initialGoroutines: 1,
updatedGoroutines: 2,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
cm := &certManager{
lc: &fakeLocalClient{},
certLoops: make(map[string]context.CancelFunc),
}
allDone := make(chan bool, 1)
defer cm.tracker.AddDoneCallback(func() {
cm.mu.Lock()
defer cm.mu.Unlock()
if cm.tracker.RunningGoroutines() > 0 {
return
}
select {
case allDone <- true:
default:
}
})()
err := cm.ensureCertLoops(ctx, tt.initialConfig)
if (err != nil) != tt.wantErr {
t.Fatalf("ensureCertLoops() error = %v", err)
}
if got := cm.tracker.RunningGoroutines(); got != tt.initialGoroutines {
t.Errorf("after initial config: got %d running goroutines, want %d", got, tt.initialGoroutines)
}
if tt.updatedConfig != nil {
if err := cm.ensureCertLoops(ctx, tt.updatedConfig); err != nil {
t.Fatalf("ensureCertLoops() error on update = %v", err)
}
// Although starting goroutines and cancelling the
// context happens in the main goroutine, the actual
// goroutine exit when a context is cancelled does
// not, so wait a bit for the running goroutine
// count to reach the expected number.
deadline := time.After(5 * time.Second)
for {
if got := cm.tracker.RunningGoroutines(); got == tt.updatedGoroutines {
break
}
select {
case <-deadline:
t.Fatalf("timed out waiting for goroutine count to reach %d, currently at %d",
tt.updatedGoroutines, cm.tracker.RunningGoroutines())
case <-time.After(10 * time.Millisecond):
continue
}
}
}
if tt.updatedGoroutines == 0 {
return // no goroutines to wait for
}
// cancel context to make goroutines exit
cancel()
select {
case <-time.After(5 * time.Second):
t.Fatal("timed out waiting for goroutine to finish")
case <-allDone:
}
})
}
}

View File

@@ -646,7 +646,7 @@ runLoop:
if cfg.ServeConfigPath != "" {
triggerWatchServeConfigChanges.Do(func() {
go watchServeConfigChanges(ctx, cfg.ServeConfigPath, certDomainChanged, certDomain, client, kc)
go watchServeConfigChanges(ctx, certDomainChanged, certDomain, client, kc, cfg)
})
}

View File

@@ -28,10 +28,11 @@ import (
// applies it to lc. It exits when ctx is canceled. cdChanged is a channel that
// is written to when the certDomain changes, causing the serve config to be
// re-read and applied.
func watchServeConfigChanges(ctx context.Context, path string, cdChanged <-chan bool, certDomainAtomic *atomic.Pointer[string], lc *local.Client, kc *kubeClient) {
func watchServeConfigChanges(ctx context.Context, cdChanged <-chan bool, certDomainAtomic *atomic.Pointer[string], lc *local.Client, kc *kubeClient, cfg *settings) {
if certDomainAtomic == nil {
panic("certDomainAtomic must not be nil")
}
var tickChan <-chan time.Time
var eventChan <-chan fsnotify.Event
if w, err := fsnotify.NewWatcher(); err != nil {
@@ -43,7 +44,7 @@ func watchServeConfigChanges(ctx context.Context, path string, cdChanged <-chan
tickChan = ticker.C
} else {
defer w.Close()
if err := w.Add(filepath.Dir(path)); err != nil {
if err := w.Add(filepath.Dir(cfg.ServeConfigPath)); err != nil {
log.Fatalf("serve proxy: failed to add fsnotify watch: %v", err)
}
eventChan = w.Events
@@ -51,6 +52,12 @@ func watchServeConfigChanges(ctx context.Context, path string, cdChanged <-chan
var certDomain string
var prevServeConfig *ipn.ServeConfig
var cm certManager
if cfg.CertShareMode == "rw" {
cm = certManager{
lc: lc,
}
}
for {
select {
case <-ctx.Done():
@@ -63,12 +70,12 @@ func watchServeConfigChanges(ctx context.Context, path string, cdChanged <-chan
// k8s handles these mounts. So just re-read the file and apply it
// if it's changed.
}
sc, err := readServeConfig(path, certDomain)
sc, err := readServeConfig(cfg.ServeConfigPath, certDomain)
if err != nil {
log.Fatalf("serve proxy: failed to read serve config: %v", err)
}
if sc == nil {
log.Printf("serve proxy: no serve config at %q, skipping", path)
log.Printf("serve proxy: no serve config at %q, skipping", cfg.ServeConfigPath)
continue
}
if prevServeConfig != nil && reflect.DeepEqual(sc, prevServeConfig) {
@@ -83,6 +90,12 @@ func watchServeConfigChanges(ctx context.Context, path string, cdChanged <-chan
}
}
prevServeConfig = sc
if cfg.CertShareMode != "rw" {
continue
}
if err := cm.ensureCertLoops(ctx, sc); err != nil {
log.Fatalf("serve proxy: error ensuring cert loops: %v", err)
}
}
}
@@ -96,6 +109,7 @@ func certDomainFromNetmap(nm *netmap.NetworkMap) string {
// localClient is a subset of [local.Client] that can be mocked for testing.
type localClient interface {
SetServeConfig(context.Context, *ipn.ServeConfig) error
CertPair(context.Context, string) ([]byte, []byte, error)
}
func updateServeConfig(ctx context.Context, sc *ipn.ServeConfig, certDomain string, lc localClient) error {

View File

@@ -206,6 +206,10 @@ func (m *fakeLocalClient) SetServeConfig(ctx context.Context, cfg *ipn.ServeConf
return nil
}
func (m *fakeLocalClient) CertPair(ctx context.Context, domain string) (certPEM, keyPEM []byte, err error) {
return nil, nil, nil
}
func TestHasHTTPSEndpoint(t *testing.T) {
tests := []struct {
name string

View File

@@ -74,6 +74,12 @@ type settings struct {
HealthCheckEnabled bool
DebugAddrPort string
EgressProxiesCfgPath string
// CertShareMode is set for Kubernetes Pods running cert share mode.
// Possible values are empty (containerboot doesn't run any certs
// logic), 'ro' (for Pods that should never attempt to issue/renew
// certs) and 'rw' for Pods that should manage the TLS certs shared
// amongst the replicas.
CertShareMode string
}
func configFromEnv() (*settings, error) {
@@ -128,6 +134,17 @@ func configFromEnv() (*settings, error) {
cfg.PodIPv6 = parsed.String()
}
}
// If cert share is enabled, set the replica as read or write. Only 0th
// replica should be able to write.
isInCertShareMode := defaultBool("TS_EXPERIMENTAL_CERT_SHARE", false)
if isInCertShareMode {
cfg.CertShareMode = "ro"
podName := os.Getenv("POD_NAME")
if strings.HasSuffix(podName, "-0") {
cfg.CertShareMode = "rw"
}
}
if err := cfg.validate(); err != nil {
return nil, fmt.Errorf("invalid configuration: %v", err)
}

View File

@@ -33,6 +33,9 @@ func startTailscaled(ctx context.Context, cfg *settings) (*local.Client, *os.Pro
cmd.SysProcAttr = &syscall.SysProcAttr{
Setpgid: true,
}
if cfg.CertShareMode != "" {
cmd.Env = append(os.Environ(), "TS_CERT_SHARE_MODE="+cfg.CertShareMode)
}
log.Printf("Starting tailscaled")
if err := cmd.Start(); err != nil {
return nil, nil, fmt.Errorf("starting tailscaled failed: %v", err)

View File

@@ -96,6 +96,7 @@ tailscale.com/cmd/derper dependencies: (generated by github.com/tailscale/depawa
tailscale.com/disco from tailscale.com/derp
tailscale.com/drive from tailscale.com/client/local+
tailscale.com/envknob from tailscale.com/client/local+
tailscale.com/feature from tailscale.com/tsweb
tailscale.com/health from tailscale.com/net/tlsdial+
tailscale.com/hostinfo from tailscale.com/net/netmon+
tailscale.com/ipn from tailscale.com/client/local
@@ -128,8 +129,8 @@ tailscale.com/cmd/derper dependencies: (generated by github.com/tailscale/depawa
tailscale.com/tstime from tailscale.com/derp+
tailscale.com/tstime/mono from tailscale.com/tstime/rate
tailscale.com/tstime/rate from tailscale.com/derp
tailscale.com/tsweb from tailscale.com/cmd/derper
tailscale.com/tsweb/promvarz from tailscale.com/tsweb
tailscale.com/tsweb from tailscale.com/cmd/derper+
tailscale.com/tsweb/promvarz from tailscale.com/cmd/derper
tailscale.com/tsweb/varz from tailscale.com/tsweb+
tailscale.com/types/dnstype from tailscale.com/tailcfg+
tailscale.com/types/empty from tailscale.com/ipn
@@ -154,6 +155,7 @@ tailscale.com/cmd/derper dependencies: (generated by github.com/tailscale/depawa
💣 tailscale.com/util/deephash from tailscale.com/util/syspolicy/setting
L 💣 tailscale.com/util/dirwalk from tailscale.com/metrics
tailscale.com/util/dnsname from tailscale.com/hostinfo+
tailscale.com/util/eventbus from tailscale.com/net/netmon
💣 tailscale.com/util/hashx from tailscale.com/util/deephash
tailscale.com/util/httpm from tailscale.com/client/tailscale
tailscale.com/util/lineiter from tailscale.com/hostinfo+
@@ -307,9 +309,9 @@ tailscale.com/cmd/derper dependencies: (generated by github.com/tailscale/depawa
hash/fnv from google.golang.org/protobuf/internal/detrand
hash/maphash from go4.org/mem
html from net/http/pprof+
html/template from tailscale.com/cmd/derper
html/template from tailscale.com/cmd/derper+
internal/abi from crypto/x509/internal/macos+
internal/asan from syscall+
internal/asan from internal/runtime/maps+
internal/bisect from internal/godebug
internal/bytealg from bytes+
internal/byteorder from crypto/cipher+
@@ -319,12 +321,12 @@ tailscale.com/cmd/derper dependencies: (generated by github.com/tailscale/depawa
internal/filepathlite from os+
internal/fmtsort from fmt+
internal/goarch from crypto/internal/fips140deps/cpu+
internal/godebug from crypto/tls+
internal/godebug from crypto/internal/fips140deps/godebug+
internal/godebugs from internal/godebug+
internal/goexperiment from runtime+
internal/goexperiment from hash/maphash+
internal/goos from crypto/x509+
internal/itoa from internal/poll+
internal/msan from syscall+
internal/msan from internal/runtime/maps+
internal/nettrace from net+
internal/oserror from io/fs+
internal/poll from net+

View File

@@ -49,6 +49,9 @@ import (
"tailscale.com/types/key"
"tailscale.com/types/logger"
"tailscale.com/version"
// Support for prometheus varz in tsweb
_ "tailscale.com/tsweb/promvarz"
)
var (

View File

@@ -15,6 +15,9 @@ import (
"tailscale.com/prober"
"tailscale.com/tsweb"
"tailscale.com/version"
// Support for prometheus varz in tsweb
_ "tailscale.com/tsweb/promvarz"
)
var (

View File

@@ -82,6 +82,10 @@ tailscale.com/cmd/k8s-operator dependencies: (generated by github.com/tailscale/
L github.com/aws/smithy-go/waiter from github.com/aws/aws-sdk-go-v2/service/ssm
github.com/beorn7/perks/quantile from github.com/prometheus/client_golang/prometheus
💣 github.com/cespare/xxhash/v2 from github.com/prometheus/client_golang/prometheus
github.com/coder/websocket from tailscale.com/util/eventbus
github.com/coder/websocket/internal/errd from github.com/coder/websocket
github.com/coder/websocket/internal/util from github.com/coder/websocket
github.com/coder/websocket/internal/xsync from github.com/coder/websocket
L github.com/coreos/go-iptables/iptables from tailscale.com/util/linuxfw
💣 github.com/davecgh/go-spew/spew from k8s.io/apimachinery/pkg/util/dump
W 💣 github.com/dblohm7/wingoes from github.com/dblohm7/wingoes/com+
@@ -903,7 +907,8 @@ tailscale.com/cmd/k8s-operator dependencies: (generated by github.com/tailscale/
tailscale.com/tstime from tailscale.com/cmd/k8s-operator+
tailscale.com/tstime/mono from tailscale.com/net/tstun+
tailscale.com/tstime/rate from tailscale.com/derp+
tailscale.com/tsweb/varz from tailscale.com/util/usermetric
tailscale.com/tsweb from tailscale.com/util/eventbus
tailscale.com/tsweb/varz from tailscale.com/util/usermetric+
tailscale.com/types/appctype from tailscale.com/ipn/ipnlocal
tailscale.com/types/bools from tailscale.com/tsnet
tailscale.com/types/dnstype from tailscale.com/ipn/ipnlocal+
@@ -932,6 +937,7 @@ tailscale.com/cmd/k8s-operator dependencies: (generated by github.com/tailscale/
💣 tailscale.com/util/deephash from tailscale.com/ipn/ipnlocal+
L 💣 tailscale.com/util/dirwalk from tailscale.com/metrics+
tailscale.com/util/dnsname from tailscale.com/appc+
tailscale.com/util/eventbus from tailscale.com/tsd+
tailscale.com/util/execqueue from tailscale.com/appc+
tailscale.com/util/goroutines from tailscale.com/ipn/ipnlocal
tailscale.com/util/groupmember from tailscale.com/client/web+
@@ -1149,9 +1155,9 @@ tailscale.com/cmd/k8s-operator dependencies: (generated by github.com/tailscale/
hash/fnv from google.golang.org/protobuf/internal/detrand
hash/maphash from go4.org/mem
html from html/template+
html/template from github.com/gorilla/csrf
html/template from github.com/gorilla/csrf+
internal/abi from crypto/x509/internal/macos+
internal/asan from syscall+
internal/asan from internal/runtime/maps+
internal/bisect from internal/godebug
internal/bytealg from bytes+
internal/byteorder from crypto/cipher+
@@ -1163,11 +1169,11 @@ tailscale.com/cmd/k8s-operator dependencies: (generated by github.com/tailscale/
internal/goarch from crypto/internal/fips140deps/cpu+
internal/godebug from archive/tar+
internal/godebugs from internal/godebug+
internal/goexperiment from runtime+
internal/goexperiment from hash/maphash+
internal/goos from crypto/x509+
internal/itoa from internal/poll+
internal/lazyregexp from go/doc
internal/msan from syscall+
internal/msan from internal/runtime/maps+
internal/nettrace from net+
internal/oserror from io/fs+
internal/poll from net+

View File

@@ -75,7 +75,7 @@ rules:
verbs: ["get", "list", "watch", "create", "update", "deletecollection"]
- apiGroups: ["rbac.authorization.k8s.io"]
resources: ["roles", "rolebindings"]
verbs: ["get", "create", "patch", "update", "list", "watch"]
verbs: ["get", "create", "patch", "update", "list", "watch", "deletecollection"]
- apiGroups: ["monitoring.coreos.com"]
resources: ["servicemonitors"]
verbs: ["get", "list", "update", "create", "delete"]

View File

@@ -2215,6 +2215,22 @@ spec:
https://tailscale.com/kb/1019/subnets#use-your-subnet-routes-from-other-devices
Defaults to false.
type: boolean
useLetsEncryptStagingEnvironment:
description: |-
Set UseLetsEncryptStagingEnvironment to true to issue TLS
certificates for any HTTPS endpoints exposed to the tailnet from
LetsEncrypt's staging environment.
https://letsencrypt.org/docs/staging-environment/
This setting only affects Tailscale Ingress resources.
By default Ingress TLS certificates are issued from LetsEncrypt's
production environment.
Changing this setting from true to false will result in any
existing certs being re-issued from the production environment.
Changing this setting from false (the default) to true when certs have
already been provisioned from the production environment will NOT result
in certs being re-issued from the staging environment before they need
to be renewed.
type: boolean
status:
description: |-
Status of the ProxyClass. This is set and managed automatically.

View File

@@ -103,7 +103,7 @@ spec:
pattern: ^tag:[a-zA-Z][a-zA-Z0-9-]*$
type:
description: |-
Type of the ProxyGroup proxies. Supported types are egress and ingress.
Type of the ProxyGroup proxies. Currently the only supported type is egress.
Type is immutable once a ProxyGroup is created.
type: string
enum:

View File

@@ -2685,6 +2685,22 @@ spec:
Defaults to false.
type: boolean
type: object
useLetsEncryptStagingEnvironment:
description: |-
Set UseLetsEncryptStagingEnvironment to true to issue TLS
certificates for any HTTPS endpoints exposed to the tailnet from
LetsEncrypt's staging environment.
https://letsencrypt.org/docs/staging-environment/
This setting only affects Tailscale Ingress resources.
By default Ingress TLS certificates are issued from LetsEncrypt's
production environment.
Changing this setting from true to false will result in any
existing certs being re-issued from the production environment.
Changing this setting from false (the default) to true when certs have
already been provisioned from the production environment will NOT result
in certs being re-issued from the staging environment before they need
to be renewed.
type: boolean
type: object
status:
description: |-
@@ -2860,7 +2876,7 @@ spec:
type: array
type:
description: |-
Type of the ProxyGroup proxies. Supported types are egress and ingress.
Type of the ProxyGroup proxies. Currently the only supported type is egress.
Type is immutable once a ProxyGroup is created.
enum:
- egress
@@ -4898,6 +4914,7 @@ rules:
- update
- list
- watch
- deletecollection
- apiGroups:
- monitoring.coreos.com
resources:

View File

@@ -22,6 +22,7 @@ import (
"sigs.k8s.io/controller-runtime/pkg/client/fake"
operatorutils "tailscale.com/k8s-operator"
tsapi "tailscale.com/k8s-operator/apis/v1alpha1"
"tailscale.com/kube/kubetypes"
"tailscale.com/tstest"
"tailscale.com/types/ptr"
)
@@ -163,10 +164,10 @@ func headlessSvcForParent(o client.Object, typ string) *corev1.Service {
Name: o.GetName(),
Namespace: "tailscale",
Labels: map[string]string{
LabelManaged: "true",
LabelParentName: o.GetName(),
LabelParentNamespace: o.GetNamespace(),
LabelParentType: typ,
kubetypes.LabelManaged: "true",
LabelParentName: o.GetName(),
LabelParentNamespace: o.GetNamespace(),
LabelParentType: typ,
},
},
Spec: corev1.ServiceSpec{

View File

@@ -112,9 +112,9 @@ func (er *egressPodsReconciler) Reconcile(ctx context.Context, req reconcile.Req
}
// Get all ClusterIP Services for all egress targets exposed to cluster via this ProxyGroup.
lbls := map[string]string{
LabelManaged: "true",
labelProxyGroup: proxyGroupName,
labelSvcType: typeEgress,
kubetypes.LabelManaged: "true",
labelProxyGroup: proxyGroupName,
labelSvcType: typeEgress,
}
svcs := &corev1.ServiceList{}
if err := er.List(ctx, svcs, client.InNamespace(er.tsNamespace), client.MatchingLabels(lbls)); err != nil {

View File

@@ -450,9 +450,9 @@ func newSvc(name string, port int32) (*corev1.Service, string) {
Namespace: "operator-ns",
Name: name,
Labels: map[string]string{
LabelManaged: "true",
labelProxyGroup: "dev",
labelSvcType: typeEgress,
kubetypes.LabelManaged: "true",
labelProxyGroup: "dev",
labelSvcType: typeEgress,
},
},
Spec: corev1.ServiceSpec{},

View File

@@ -680,12 +680,12 @@ func egressSvcsConfigs(ctx context.Context, cl client.Client, proxyGroupName, ts
// should probably validate and truncate (?) the names is they are too long.
func egressSvcChildResourceLabels(svc *corev1.Service) map[string]string {
return map[string]string{
LabelManaged: "true",
LabelParentType: "svc",
LabelParentName: svc.Name,
LabelParentNamespace: svc.Namespace,
labelProxyGroup: svc.Annotations[AnnotationProxyGroup],
labelSvcType: typeEgress,
kubetypes.LabelManaged: "true",
LabelParentType: "svc",
LabelParentName: svc.Name,
LabelParentNamespace: svc.Namespace,
labelProxyGroup: svc.Annotations[AnnotationProxyGroup],
labelSvcType: typeEgress,
}
}

View File

@@ -22,6 +22,7 @@ import (
"go.uber.org/zap"
corev1 "k8s.io/api/core/v1"
networkingv1 "k8s.io/api/networking/v1"
rbacv1 "k8s.io/api/rbac/v1"
apiequality "k8s.io/apimachinery/pkg/api/equality"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@@ -48,10 +49,11 @@ const (
// FinalizerNamePG is the finalizer used by the IngressPGReconciler
FinalizerNamePG = "tailscale.com/ingress-pg-finalizer"
indexIngressProxyGroup = ".metadata.annotations.ingress-proxy-group"
// annotationHTTPEndpoint can be used to configure the Ingress to expose an HTTP endpoint to tailnet (as
// well as the default HTTPS endpoint).
annotationHTTPEndpoint = "tailscale.com/http-endpoint"
labelDomain = "tailscale.com/domain"
)
var gaugePGIngressResources = clientmetric.NewGauge(kubetypes.MetricIngressPGResourceCount)
@@ -154,13 +156,13 @@ func (r *HAIngressReconciler) maybeProvision(ctx context.Context, hostname strin
pg := &tsapi.ProxyGroup{}
if err := r.Get(ctx, client.ObjectKey{Name: pgName}, pg); err != nil {
if apierrors.IsNotFound(err) {
logger.Infof("ProxyGroup %q does not exist", pgName)
logger.Infof("ProxyGroup does not exist")
return false, nil
}
return false, fmt.Errorf("getting ProxyGroup %q: %w", pgName, err)
}
if !tsoperator.ProxyGroupIsReady(pg) {
logger.Infof("ProxyGroup %q is not (yet) ready", pgName)
logger.Infof("ProxyGroup is not (yet) ready")
return false, nil
}
@@ -175,8 +177,6 @@ func (r *HAIngressReconciler) maybeProvision(ctx context.Context, hostname strin
r.recorder.Event(ing, corev1.EventTypeWarning, "HTTPSNotEnabled", "HTTPS is not enabled on the tailnet; ingress may not work")
}
logger = logger.With("proxy-group", pg.Name)
if !slices.Contains(ing.Finalizers, FinalizerNamePG) {
// This log line is printed exactly once during initial provisioning,
// because once the finalizer is in place this block gets skipped. So,
@@ -229,12 +229,11 @@ func (r *HAIngressReconciler) maybeProvision(ctx context.Context, hostname strin
return false, fmt.Errorf("error getting VIPService %q: %w", hostname, err)
}
}
// Generate the VIPService comment for new or existing VIPService. This
// checks and ensures that VIPService's owner references are updated for
// this Ingress and errors if that is not possible (i.e. because it
// appears that the VIPService has been created by a non-operator
// actor).
svcComment, err := r.ownerRefsComment(existingVIPSvc)
// Generate the VIPService owner annotation for new or existing VIPService.
// This checks and ensures that VIPService's owner references are updated
// for this Ingress and errors if that is not possible (i.e. because it
// appears that the VIPService has been created by a non-operator actor).
updatedAnnotations, err := r.ownerAnnotations(existingVIPSvc)
if err != nil {
const instr = "To proceed, you can either manually delete the existing VIPService or choose a different MagicDNS name at `.spec.tls.hosts[0]` in the Ingress definition"
msg := fmt.Sprintf("error ensuring ownership of VIPService %s: %v. %s", hostname, err, instr)
@@ -242,8 +241,12 @@ func (r *HAIngressReconciler) maybeProvision(ctx context.Context, hostname strin
r.recorder.Event(ing, corev1.EventTypeWarning, "InvalidVIPService", msg)
return false, nil
}
// 3. Ensure that TLS Secret and RBAC exists
if err := r.ensureCertResources(ctx, pgName, dnsName, ing); err != nil {
return false, fmt.Errorf("error ensuring cert resources: %w", err)
}
// 3. Ensure that the serve config for the ProxyGroup contains the VIPService.
// 4. Ensure that the serve config for the ProxyGroup contains the VIPService.
cm, cfg, err := r.proxyGroupServeConfig(ctx, pgName)
if err != nil {
return false, fmt.Errorf("error getting Ingress serve config: %w", err)
@@ -310,11 +313,13 @@ func (r *HAIngressReconciler) maybeProvision(ctx context.Context, hostname strin
vipPorts = append(vipPorts, "80")
}
const managedVIPServiceComment = "This VIPService is managed by the Tailscale Kubernetes Operator, do not modify"
vipSvc := &tailscale.VIPService{
Name: serviceName,
Tags: tags,
Ports: vipPorts,
Comment: svcComment,
Name: serviceName,
Tags: tags,
Ports: vipPorts,
Comment: managedVIPServiceComment,
Annotations: updatedAnnotations,
}
if existingVIPSvc != nil {
vipSvc.Addrs = existingVIPSvc.Addrs
@@ -325,8 +330,8 @@ func (r *HAIngressReconciler) maybeProvision(ctx context.Context, hostname strin
if existingVIPSvc == nil ||
!reflect.DeepEqual(vipSvc.Tags, existingVIPSvc.Tags) ||
!reflect.DeepEqual(vipSvc.Ports, existingVIPSvc.Ports) ||
!strings.EqualFold(vipSvc.Comment, existingVIPSvc.Comment) {
logger.Infof("Ensuring VIPService %q exists and is up to date", hostname)
!ownersAreSetAndEqual(vipSvc, existingVIPSvc) {
logger.Infof("Ensuring VIPService exists and is up to date")
if err := r.tsClient.CreateOrUpdateVIPService(ctx, vipSvc); err != nil {
return false, fmt.Errorf("error creating VIPService: %w", err)
}
@@ -334,35 +339,67 @@ func (r *HAIngressReconciler) maybeProvision(ctx context.Context, hostname strin
// 5. Update tailscaled's AdvertiseServices config, which should add the VIPService
// IPs to the ProxyGroup Pods' AllowedIPs in the next netmap update if approved.
if err = r.maybeUpdateAdvertiseServicesConfig(ctx, pg.Name, serviceName, true, logger); err != nil {
mode := serviceAdvertisementHTTPS
if isHTTPEndpointEnabled(ing) {
mode = serviceAdvertisementHTTPAndHTTPS
}
if err = r.maybeUpdateAdvertiseServicesConfig(ctx, pg.Name, serviceName, mode, logger); err != nil {
return false, fmt.Errorf("failed to update tailscaled config: %w", err)
}
// TODO(irbekrm): check that the replicas are ready to route traffic for the VIPService before updating Ingress
// status.
// 6. Update Ingress status
// 6. Update Ingress status if ProxyGroup Pods are ready.
count, err := r.numberPodsAdvertising(ctx, pg.Name, serviceName)
if err != nil {
return false, fmt.Errorf("failed to check if any Pods are configured: %w", err)
}
oldStatus := ing.Status.DeepCopy()
ports := []networkingv1.IngressPortStatus{
{
Protocol: "TCP",
Port: 443,
},
switch count {
case 0:
ing.Status.LoadBalancer.Ingress = nil
default:
var ports []networkingv1.IngressPortStatus
hasCerts, err := r.hasCerts(ctx, serviceName)
if err != nil {
return false, fmt.Errorf("error checking TLS credentials provisioned for Ingress: %w", err)
}
// If TLS certs have not been issued (yet), do not set port 443.
if hasCerts {
ports = append(ports, networkingv1.IngressPortStatus{
Protocol: "TCP",
Port: 443,
})
}
if isHTTPEndpointEnabled(ing) {
ports = append(ports, networkingv1.IngressPortStatus{
Protocol: "TCP",
Port: 80,
})
}
// Set Ingress status hostname only if either port 443 or 80 is advertised.
var hostname string
if len(ports) != 0 {
hostname = dnsName
}
ing.Status.LoadBalancer.Ingress = []networkingv1.IngressLoadBalancerIngress{
{
Hostname: hostname,
Ports: ports,
},
}
}
if isHTTPEndpointEnabled(ing) {
ports = append(ports, networkingv1.IngressPortStatus{
Protocol: "TCP",
Port: 80,
})
}
ing.Status.LoadBalancer.Ingress = []networkingv1.IngressLoadBalancerIngress{
{
Hostname: dnsName,
Ports: ports,
},
}
if apiequality.Semantic.DeepEqual(oldStatus, ing.Status) {
if apiequality.Semantic.DeepEqual(oldStatus, &ing.Status) {
return svcsChanged, nil
}
const prefix = "Updating Ingress status"
if count == 0 {
logger.Infof("%s. No Pods are advertising VIPService yet", prefix)
} else {
logger.Infof("%s. %d Pod(s) advertising VIPService", prefix, count)
}
if err := r.Status().Update(ctx, ing); err != nil {
return false, fmt.Errorf("failed to update Ingress status: %w", err)
}
@@ -402,24 +439,24 @@ func (r *HAIngressReconciler) maybeCleanupProxyGroup(ctx context.Context, proxyG
logger.Infof("VIPService %q is not owned by any Ingress, cleaning up", vipServiceName)
// Delete the VIPService from control if necessary.
svc, _ := r.tsClient.GetVIPService(ctx, vipServiceName)
if svc != nil && isVIPServiceForAnyIngress(svc) {
logger.Infof("cleaning up orphaned VIPService %q", vipServiceName)
svcsChanged, err = r.cleanupVIPService(ctx, vipServiceName, logger)
if err != nil {
errResp := &tailscale.ErrResponse{}
if !errors.As(err, &errResp) || errResp.Status != http.StatusNotFound {
return false, fmt.Errorf("deleting VIPService %q: %w", vipServiceName, err)
}
}
svcsChanged, err = r.cleanupVIPService(ctx, vipServiceName, logger)
if err != nil {
return false, fmt.Errorf("deleting VIPService %q: %w", vipServiceName, err)
}
// Make sure the VIPService is not advertised in tailscaled or serve config.
if err = r.maybeUpdateAdvertiseServicesConfig(ctx, proxyGroupName, vipServiceName, false, logger); err != nil {
if err = r.maybeUpdateAdvertiseServicesConfig(ctx, proxyGroupName, vipServiceName, serviceAdvertisementOff, logger); err != nil {
return false, fmt.Errorf("failed to update tailscaled config services: %w", err)
}
delete(cfg.Services, vipServiceName)
serveConfigChanged = true
_, ok := cfg.Services[vipServiceName]
if ok {
logger.Infof("Removing VIPService %q from serve config", vipServiceName)
delete(cfg.Services, vipServiceName)
serveConfigChanged = true
}
if err := r.cleanupCertResources(ctx, proxyGroupName, vipServiceName); err != nil {
return false, fmt.Errorf("failed to clean up cert resources: %w", err)
}
}
}
@@ -480,16 +517,22 @@ func (r *HAIngressReconciler) maybeCleanup(ctx context.Context, hostname string,
if err != nil {
return false, fmt.Errorf("error deleting VIPService: %w", err)
}
// 3. Clean up cert resources (TLS Secret and RBAC) in the cluster.
if err := r.cleanupCertResources(ctx, pg, serviceName); err != nil {
return false, fmt.Errorf("failed to clean up cert resources: %w", err)
}
if cfg == nil || cfg.Services == nil { // user probably deleted the ProxyGroup
return svcChanged, nil
}
// 3. Unadvertise the VIPService in tailscaled config.
if err = r.maybeUpdateAdvertiseServicesConfig(ctx, pg, serviceName, false, logger); err != nil {
// 4. Unadvertise the VIPService in tailscaled config.
if err = r.maybeUpdateAdvertiseServicesConfig(ctx, pg, serviceName, serviceAdvertisementOff, logger); err != nil {
return false, fmt.Errorf("failed to update tailscaled config services: %w", err)
}
// 4. Remove the VIPService from the serve config for the ProxyGroup.
// 5. Remove the VIPService from the serve config for the ProxyGroup.
logger.Infof("Removing VIPService %q from serve config for ProxyGroup %q", hostname, pg)
delete(cfg.Services, serviceName)
cfgBytes, err := json.Marshal(cfg)
@@ -570,13 +613,6 @@ func (r *HAIngressReconciler) shouldExpose(ing *networkingv1.Ingress) bool {
return isTSIngress && pgAnnot != ""
}
func isVIPServiceForAnyIngress(svc *tailscale.VIPService) bool {
if svc == nil {
return false
}
return strings.HasPrefix(svc.Comment, "tailscale.com/k8s-operator:owned-by:")
}
// validateIngress validates that the Ingress is properly configured.
// Currently validates:
// - Any tags provided via tailscale.com/tags annotation are valid Tailscale ACL tags
@@ -650,34 +686,34 @@ func (r *HAIngressReconciler) cleanupVIPService(ctx context.Context, name tailcf
if svc == nil {
return false, nil
}
c, err := parseComment(svc)
o, err := parseOwnerAnnotation(svc)
if err != nil {
return false, fmt.Errorf("error parsing VIPService comment")
return false, fmt.Errorf("error parsing VIPService owner annotation")
}
if c == nil || len(c.OwnerRefs) == 0 {
if o == nil || len(o.OwnerRefs) == 0 {
return false, nil
}
// Comparing with the operatorID only means that we will not be able to
// clean up VIPServices in cases where the operator was deleted from the
// cluster before deleting the Ingress. Perhaps the comparison could be
// 'if or.OperatorID == r.operatorID || or.ingressUID == r.ingressUID'.
ix := slices.IndexFunc(c.OwnerRefs, func(or OwnerRef) bool {
ix := slices.IndexFunc(o.OwnerRefs, func(or OwnerRef) bool {
return or.OperatorID == r.operatorID
})
if ix == -1 {
return false, nil
}
if len(c.OwnerRefs) == 1 {
if len(o.OwnerRefs) == 1 {
logger.Infof("Deleting VIPService %q", name)
return false, r.tsClient.DeleteVIPService(ctx, name)
}
c.OwnerRefs = slices.Delete(c.OwnerRefs, ix, ix+1)
o.OwnerRefs = slices.Delete(o.OwnerRefs, ix, ix+1)
logger.Infof("Deleting VIPService %q", name)
json, err := json.Marshal(c)
json, err := json.Marshal(o)
if err != nil {
return false, fmt.Errorf("error marshalling updated VIPService owner reference: %w", err)
}
svc.Comment = string(json)
svc.Annotations[ownerAnnotation] = string(json)
return true, r.tsClient.CreateOrUpdateVIPService(ctx, svc)
}
@@ -689,8 +725,16 @@ func isHTTPEndpointEnabled(ing *networkingv1.Ingress) bool {
return ing.Annotations[annotationHTTPEndpoint] == "enabled"
}
func (a *HAIngressReconciler) maybeUpdateAdvertiseServicesConfig(ctx context.Context, pgName string, serviceName tailcfg.ServiceName, shouldBeAdvertised bool, logger *zap.SugaredLogger) (err error) {
logger.Debugf("Updating ProxyGroup tailscaled configs to advertise service %q: %v", serviceName, shouldBeAdvertised)
// serviceAdvertisementMode describes the desired advertisement state of a VIPService.
type serviceAdvertisementMode int
const (
serviceAdvertisementOff serviceAdvertisementMode = iota // Should not be advertised
serviceAdvertisementHTTPS // Port 443 should be advertised
serviceAdvertisementHTTPAndHTTPS // Both ports 80 and 443 should be advertised
)
func (a *HAIngressReconciler) maybeUpdateAdvertiseServicesConfig(ctx context.Context, pgName string, serviceName tailcfg.ServiceName, mode serviceAdvertisementMode, logger *zap.SugaredLogger) (err error) {
// Get all config Secrets for this ProxyGroup.
secrets := &corev1.SecretList{}
@@ -698,6 +742,21 @@ func (a *HAIngressReconciler) maybeUpdateAdvertiseServicesConfig(ctx context.Con
return fmt.Errorf("failed to list config Secrets: %w", err)
}
// Verify that TLS cert for the VIPService has been successfully issued
// before attempting to advertise the service.
// This is so that in multi-cluster setups where some Ingresses succeed
// in issuing certs and some do not (rate limits), clients are not pinned
// to a backend that is not able to serve HTTPS.
// The only exception is Ingresses with an HTTP endpoint enabled - if an
// Ingress has an HTTP endpoint enabled, it will be advertised even if the
// TLS cert is not yet provisioned.
hasCert, err := a.hasCerts(ctx, serviceName)
if err != nil {
return fmt.Errorf("error checking TLS credentials provisioned for service %q: %w", serviceName, err)
}
shouldBeAdvertised := (mode == serviceAdvertisementHTTPAndHTTPS) ||
(mode == serviceAdvertisementHTTPS && hasCert) // if we only expose port 443 and don't have certs (yet), do not advertise
for _, secret := range secrets.Items {
var updated bool
for fileName, confB := range secret.Data {
@@ -740,6 +799,39 @@ func (a *HAIngressReconciler) maybeUpdateAdvertiseServicesConfig(ctx context.Con
return nil
}
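Taken on its own, the cert gating above reduces to a small predicate over the advertisement mode and the cert state; a minimal sketch (the helper name is illustrative and not part of this change):
func shouldAdvertiseService(mode serviceAdvertisementMode, hasCert bool) bool {
	// Advertise when the Ingress also exposes a plain HTTP endpoint, or when it
	// is HTTPS-only and its TLS cert has already been issued.
	return mode == serviceAdvertisementHTTPAndHTTPS ||
		(mode == serviceAdvertisementHTTPS && hasCert)
}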
func (a *HAIngressReconciler) numberPodsAdvertising(ctx context.Context, pgName string, serviceName tailcfg.ServiceName) (int, error) {
// Get all state Secrets for this ProxyGroup.
secrets := &corev1.SecretList{}
if err := a.List(ctx, secrets, client.InNamespace(a.tsNamespace), client.MatchingLabels(pgSecretLabels(pgName, "state"))); err != nil {
return 0, fmt.Errorf("failed to list ProxyGroup %q state Secrets: %w", pgName, err)
}
var count int
for _, secret := range secrets.Items {
prefs, ok, err := getDevicePrefs(&secret)
if err != nil {
return 0, fmt.Errorf("error getting node metadata: %w", err)
}
if !ok {
continue
}
if slices.Contains(prefs.AdvertiseServices, serviceName.String()) {
count++
}
}
return count, nil
}
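For context, a Pod counts as advertising when the device prefs stored in its ProxyGroup state Secret list the service. The test fixture later in this change seeds a state Secret with roughly this shape (profile name and node ID are illustrative):
stateData := map[string][]byte{
	"_current-profile": []byte("profile-foo"),
	// getDevicePrefs reads the currently selected profile; its AdvertiseServices
	// list must contain the VIPService name for the Pod to be counted.
	"profile-foo": []byte(`{"AdvertiseServices":["svc:my-svc"],"Config":{"NodeID":"node-foo"}}`),
}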
const ownerAnnotation = "tailscale.com/owner-references"
// ownerAnnotationValue is the content of the VIPService.Annotation[ownerAnnotation] field.
type ownerAnnotationValue struct {
// OwnerRefs is a list of owner references that identify all operator
// instances that manage this VIPService.
OwnerRefs []OwnerRef `json:"ownerRefs,omitempty"`
}
// OwnerRef is an owner reference that uniquely identifies a Tailscale
// Kubernetes operator instance.
type OwnerRef struct {
@@ -747,60 +839,110 @@ type OwnerRef struct {
OperatorID string `json:"operatorID,omitempty"`
}
// comment is the content of the VIPService.Comment field.
type comment struct {
// OwnerRefs is a list of owner references that identify all operator
// instances that manage this VIPService.
OwnerRefs []OwnerRef `json:"ownerRefs,omitempty"`
}
// ownerRefsComment returns a VIPService Comment that includes an owner reference for this
// operator instance for the provided VIPService. If the VIPService is nil, a
// new comment with owner ref is returned. If the VIPService is not nil, the
// existing comment is returned with the owner reference added, if not already
// present. If the VIPService is not nil, but does not contain a comment we
// return an error as this likely means that the VIPService was created by
// something other than a Tailscale Kubernetes operator.
func (r *HAIngressReconciler) ownerRefsComment(svc *tailscale.VIPService) (string, error) {
// ownerAnnotations returns the updated annotations required to ensure this
// instance of the operator is included as an owner. If the VIPService is not
// nil, but does not contain an owner we return an error as this likely means
// that the VIPService was created by something other than a Tailscale
// Kubernetes operator.
func (r *HAIngressReconciler) ownerAnnotations(svc *tailscale.VIPService) (map[string]string, error) {
ref := OwnerRef{
OperatorID: r.operatorID,
}
if svc == nil {
c := &comment{OwnerRefs: []OwnerRef{ref}}
c := ownerAnnotationValue{OwnerRefs: []OwnerRef{ref}}
json, err := json.Marshal(c)
if err != nil {
return "", fmt.Errorf("[unexpected] unable to marshal VIPService comment contents: %w, please report this", err)
return nil, fmt.Errorf("[unexpected] unable to marshal VIPService owner annotation contents: %w, please report this", err)
}
return string(json), nil
return map[string]string{
ownerAnnotation: string(json),
}, nil
}
c, err := parseComment(svc)
o, err := parseOwnerAnnotation(svc)
if err != nil {
return "", fmt.Errorf("error parsing existing VIPService comment: %w", err)
return nil, err
}
if c == nil || len(c.OwnerRefs) == 0 {
return "", fmt.Errorf("VIPService %s exists, but does not contain Comment field with owner references- not proceeding as this is likely a resource created by something other than a Tailscale Kubernetes Operator", svc.Name)
if o == nil || len(o.OwnerRefs) == 0 {
return nil, fmt.Errorf("VIPService %s exists, but does not contain owner annotation with owner references; not proceeding as this is likely a resource created by something other than the Tailscale Kubernetes operator", svc.Name)
}
if slices.Contains(c.OwnerRefs, ref) { // up to date
return svc.Comment, nil
if slices.Contains(o.OwnerRefs, ref) { // up to date
return svc.Annotations, nil
}
c.OwnerRefs = append(c.OwnerRefs, ref)
json, err := json.Marshal(c)
o.OwnerRefs = append(o.OwnerRefs, ref)
json, err := json.Marshal(o)
if err != nil {
return "", fmt.Errorf("error marshalling updated owner references: %w", err)
return nil, fmt.Errorf("error marshalling updated owner references: %w", err)
}
return string(json), nil
newAnnots := make(map[string]string, len(svc.Annotations)+1)
for k, v := range svc.Annotations {
newAnnots[k] = v
}
newAnnots[ownerAnnotation] = string(json)
return newAnnots, nil
}
// parseComment returns VIPService comment or nil if none found or not matching the expected format.
func parseComment(vipSvc *tailscale.VIPService) (*comment, error) {
if vipSvc.Comment == "" {
// parseOwnerAnnotation returns nil if no valid owner annotation is found.
func parseOwnerAnnotation(vipSvc *tailscale.VIPService) (*ownerAnnotationValue, error) {
if vipSvc.Annotations == nil || vipSvc.Annotations[ownerAnnotation] == "" {
return nil, nil
}
c := &comment{}
if err := json.Unmarshal([]byte(vipSvc.Comment), c); err != nil {
return nil, fmt.Errorf("error parsing VIPService Comment field %q: %w", vipSvc.Comment, err)
o := &ownerAnnotationValue{}
if err := json.Unmarshal([]byte(vipSvc.Annotations[ownerAnnotation]), o); err != nil {
return nil, fmt.Errorf("error parsing VIPService %s annotation %q: %w", ownerAnnotation, vipSvc.Annotations[ownerAnnotation], err)
}
return c, nil
return o, nil
}
func ownersAreSetAndEqual(a, b *tailscale.VIPService) bool {
return a != nil && b != nil &&
a.Annotations != nil && b.Annotations != nil &&
a.Annotations[ownerAnnotation] != "" &&
b.Annotations[ownerAnnotation] != "" &&
strings.EqualFold(a.Annotations[ownerAnnotation], b.Annotations[ownerAnnotation])
}
// ensureCertResources ensures that the TLS Secret for an HA Ingress and RBAC
// resources that allow proxies to manage the Secret are created.
// Note that Tailscale VIPService name validation matches Kubernetes
// resource name validation, so we can be certain that the VIPService name
// (domain) is a valid Kubernetes resource name.
// https://github.com/tailscale/tailscale/blob/8b1e7f646ee4730ad06c9b70c13e7861b964949b/util/dnsname/dnsname.go#L99
// https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-subdomain-names
func (r *HAIngressReconciler) ensureCertResources(ctx context.Context, pgName, domain string, ing *networkingv1.Ingress) error {
secret := certSecret(pgName, r.tsNamespace, domain, ing)
if _, err := createOrUpdate(ctx, r.Client, r.tsNamespace, secret, nil); err != nil {
return fmt.Errorf("failed to create or update Secret %s: %w", secret.Name, err)
}
role := certSecretRole(pgName, r.tsNamespace, domain)
if _, err := createOrUpdate(ctx, r.Client, r.tsNamespace, role, nil); err != nil {
return fmt.Errorf("failed to create or update Role %s: %w", role.Name, err)
}
rb := certSecretRoleBinding(pgName, r.tsNamespace, domain)
if _, err := createOrUpdate(ctx, r.Client, r.tsNamespace, rb, nil); err != nil {
return fmt.Errorf("failed to create or update RoleBinding %s: %w", rb.Name, err)
}
return nil
}
// cleanupCertResources ensures that the TLS Secret and associated RBAC
// resources that allow proxies to read/write to the Secret are deleted.
func (r *HAIngressReconciler) cleanupCertResources(ctx context.Context, pgName string, name tailcfg.ServiceName) error {
domainName, err := r.dnsNameForService(ctx, tailcfg.ServiceName(name))
if err != nil {
return fmt.Errorf("error getting DNS name for VIPService %s: %w", name, err)
}
labels := certResourceLabels(pgName, domainName)
if err := r.DeleteAllOf(ctx, &rbacv1.RoleBinding{}, client.InNamespace(r.tsNamespace), client.MatchingLabels(labels)); err != nil {
return fmt.Errorf("error deleting RoleBinding for domain name %s: %w", domainName, err)
}
if err := r.DeleteAllOf(ctx, &rbacv1.Role{}, client.InNamespace(r.tsNamespace), client.MatchingLabels(labels)); err != nil {
return fmt.Errorf("error deleting Role for domain name %s: %w", domainName, err)
}
if err := r.DeleteAllOf(ctx, &corev1.Secret{}, client.InNamespace(r.tsNamespace), client.MatchingLabels(labels)); err != nil {
return fmt.Errorf("error deleting Secret for domain name %s: %w", domainName, err)
}
return nil
}
// requeueInterval returns a time duration between 5 and 10 minutes, which is
@@ -811,3 +953,123 @@ func parseComment(vipSvc *tailscale.VIPService) (*comment, error) {
func requeueInterval() time.Duration {
return time.Duration(rand.N(5)+5) * time.Minute
}
// certSecretRole creates a Role that will allow proxies to manage the TLS
// Secret for the given domain. Domain must be a valid Kubernetes resource name.
func certSecretRole(pgName, namespace, domain string) *rbacv1.Role {
return &rbacv1.Role{
ObjectMeta: metav1.ObjectMeta{
Name: domain,
Namespace: namespace,
Labels: certResourceLabels(pgName, domain),
},
Rules: []rbacv1.PolicyRule{
{
APIGroups: []string{""},
Resources: []string{"secrets"},
ResourceNames: []string{domain},
Verbs: []string{
"get",
"list",
"patch",
"update",
},
},
},
}
}
// certSecretRoleBinding creates a RoleBinding for Role that will allow proxies
// to manage the TLS Secret for the given domain. Domain must be a valid
// Kubernetes resource name.
func certSecretRoleBinding(pgName, namespace, domain string) *rbacv1.RoleBinding {
return &rbacv1.RoleBinding{
ObjectMeta: metav1.ObjectMeta{
Name: domain,
Namespace: namespace,
Labels: certResourceLabels(pgName, domain),
},
Subjects: []rbacv1.Subject{
{
Kind: "ServiceAccount",
Name: pgName,
Namespace: namespace,
},
},
RoleRef: rbacv1.RoleRef{
Kind: "Role",
Name: domain,
},
}
}
// certSecret creates a Secret that will store the TLS certificate and private
// key for the given domain. Domain must be a valid Kubernetes resource name.
func certSecret(pgName, namespace, domain string, ing *networkingv1.Ingress) *corev1.Secret {
labels := certResourceLabels(pgName, domain)
labels[kubetypes.LabelSecretType] = "certs"
// Labels that identify the parent Ingress resource let us reconcile
// the Ingress when the TLS Secret is updated (for example, when TLS
// certs have been provisioned).
labels[LabelParentName] = ing.Name
labels[LabelParentNamespace] = ing.Namespace
return &corev1.Secret{
TypeMeta: metav1.TypeMeta{
APIVersion: "v1",
Kind: "Secret",
},
ObjectMeta: metav1.ObjectMeta{
Name: domain,
Namespace: namespace,
Labels: labels,
},
Data: map[string][]byte{
corev1.TLSCertKey: nil,
corev1.TLSPrivateKeyKey: nil,
},
Type: corev1.SecretTypeTLS,
}
}
func certResourceLabels(pgName, domain string) map[string]string {
return map[string]string{
kubetypes.LabelManaged: "true",
labelProxyGroup: pgName,
labelDomain: domain,
}
}
// dnsNameForService returns the DNS name for the given VIPService name.
func (r *HAIngressReconciler) dnsNameForService(ctx context.Context, svc tailcfg.ServiceName) (string, error) {
s := svc.WithoutPrefix()
tcd, err := r.tailnetCertDomain(ctx)
if err != nil {
return "", fmt.Errorf("error determining DNS name base: %w", err)
}
return s + "." + tcd, nil
}
// hasCerts checks if the TLS Secret for the given service has non-zero cert and key data.
func (r *HAIngressReconciler) hasCerts(ctx context.Context, svc tailcfg.ServiceName) (bool, error) {
domain, err := r.dnsNameForService(ctx, svc)
if err != nil {
return false, fmt.Errorf("failed to get DNS name for service: %w", err)
}
secret := &corev1.Secret{}
err = r.Get(ctx, client.ObjectKey{
Namespace: r.tsNamespace,
Name: domain,
}, secret)
if err != nil {
if apierrors.IsNotFound(err) {
return false, nil
}
return false, fmt.Errorf("failed to get TLS Secret: %w", err)
}
cert := secret.Data[corev1.TLSCertKey]
key := secret.Data[corev1.TLSPrivateKeyKey]
return len(cert) > 0 && len(key) > 0, nil
}

View File

@@ -8,8 +8,10 @@ package main
import (
"context"
"encoding/json"
"errors"
"fmt"
"maps"
"net/http"
"reflect"
"testing"
@@ -18,6 +20,7 @@ import (
"go.uber.org/zap"
corev1 "k8s.io/api/core/v1"
networkingv1 "k8s.io/api/networking/v1"
rbacv1 "k8s.io/api/rbac/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/tools/record"
@@ -28,6 +31,7 @@ import (
"tailscale.com/ipn/ipnstate"
tsoperator "tailscale.com/k8s-operator"
tsapi "tailscale.com/k8s-operator/apis/v1alpha1"
"tailscale.com/kube/kubetypes"
"tailscale.com/tailcfg"
"tailscale.com/types/ptr"
)
@@ -56,7 +60,7 @@ func TestIngressPGReconciler(t *testing.T) {
},
},
TLS: []networkingv1.IngressTLS{
{Hosts: []string{"my-svc.tailnetxyz.ts.net"}},
{Hosts: []string{"my-svc"}},
},
},
}
@@ -64,10 +68,17 @@ func TestIngressPGReconciler(t *testing.T) {
// Verify initial reconciliation
expectReconciled(t, ingPGR, "default", "test-ingress")
populateTLSSecret(context.Background(), fc, "test-pg", "my-svc.ts.net")
expectReconciled(t, ingPGR, "default", "test-ingress")
verifyServeConfig(t, fc, "svc:my-svc", false)
verifyVIPService(t, ft, "svc:my-svc", []string{"443"})
verifyTailscaledConfig(t, fc, []string{"svc:my-svc"})
// Verify that Role and RoleBinding have been created for the first Ingress.
// Do not verify the cert Secret as that was already verified implicitly above.
expectEqual(t, fc, certSecretRole("test-pg", "operator-ns", "my-svc.ts.net"))
expectEqual(t, fc, certSecretRoleBinding("test-pg", "operator-ns", "my-svc.ts.net"))
mustUpdate(t, fc, "default", "test-ingress", func(ing *networkingv1.Ingress) {
ing.Annotations["tailscale.com/tags"] = "tag:custom,tag:test"
})
@@ -119,9 +130,16 @@ func TestIngressPGReconciler(t *testing.T) {
// Verify second Ingress reconciliation
expectReconciled(t, ingPGR, "default", "my-other-ingress")
populateTLSSecret(context.Background(), fc, "test-pg", "my-other-svc.ts.net")
expectReconciled(t, ingPGR, "default", "my-other-ingress")
verifyServeConfig(t, fc, "svc:my-other-svc", false)
verifyVIPService(t, ft, "svc:my-other-svc", []string{"443"})
// Verify that Role and RoleBinding have been created for the second Ingress.
// Do not verify the cert Secret as that was already verified implicitly above.
expectEqual(t, fc, certSecretRole("test-pg", "operator-ns", "my-other-svc.ts.net"))
expectEqual(t, fc, certSecretRoleBinding("test-pg", "operator-ns", "my-other-svc.ts.net"))
// Verify first Ingress is still working
verifyServeConfig(t, fc, "svc:my-svc", false)
verifyVIPService(t, ft, "svc:my-svc", []string{"443"})
@@ -158,6 +176,9 @@ func TestIngressPGReconciler(t *testing.T) {
}
verifyTailscaledConfig(t, fc, []string{"svc:my-svc"})
expectMissing[corev1.Secret](t, fc, "operator-ns", "my-other-svc.ts.net")
expectMissing[rbacv1.Role](t, fc, "operator-ns", "my-other-svc.ts.net")
expectMissing[rbacv1.RoleBinding](t, fc, "operator-ns", "my-other-svc.ts.net")
// Delete the first Ingress and verify cleanup
if err := fc.Delete(context.Background(), ing); err != nil {
@@ -184,6 +205,70 @@ func TestIngressPGReconciler(t *testing.T) {
t.Error("serve config not cleaned up")
}
verifyTailscaledConfig(t, fc, nil)
// Verify that cert resources were cleaned up.
expectMissing[corev1.Secret](t, fc, "operator-ns", "my-svc.ts.net")
expectMissing[rbacv1.Role](t, fc, "operator-ns", "my-svc.ts.net")
expectMissing[rbacv1.RoleBinding](t, fc, "operator-ns", "my-svc.ts.net")
}
func TestIngressPGReconciler_UpdateIngressHostname(t *testing.T) {
ingPGR, fc, ft := setupIngressTest(t)
ing := &networkingv1.Ingress{
TypeMeta: metav1.TypeMeta{Kind: "Ingress", APIVersion: "networking.k8s.io/v1"},
ObjectMeta: metav1.ObjectMeta{
Name: "test-ingress",
Namespace: "default",
UID: types.UID("1234-UID"),
Annotations: map[string]string{
"tailscale.com/proxy-group": "test-pg",
},
},
Spec: networkingv1.IngressSpec{
IngressClassName: ptr.To("tailscale"),
DefaultBackend: &networkingv1.IngressBackend{
Service: &networkingv1.IngressServiceBackend{
Name: "test",
Port: networkingv1.ServiceBackendPort{
Number: 8080,
},
},
},
TLS: []networkingv1.IngressTLS{
{Hosts: []string{"my-svc"}},
},
},
}
mustCreate(t, fc, ing)
// Verify initial reconciliation
expectReconciled(t, ingPGR, "default", "test-ingress")
populateTLSSecret(context.Background(), fc, "test-pg", "my-svc.ts.net")
expectReconciled(t, ingPGR, "default", "test-ingress")
verifyServeConfig(t, fc, "svc:my-svc", false)
verifyVIPService(t, ft, "svc:my-svc", []string{"443"})
verifyTailscaledConfig(t, fc, []string{"svc:my-svc"})
// Update the Ingress hostname and make sure the original VIPService is deleted.
mustUpdate(t, fc, "default", "test-ingress", func(ing *networkingv1.Ingress) {
ing.Spec.TLS[0].Hosts[0] = "updated-svc"
})
expectReconciled(t, ingPGR, "default", "test-ingress")
populateTLSSecret(context.Background(), fc, "test-pg", "updated-svc.ts.net")
expectReconciled(t, ingPGR, "default", "test-ingress")
verifyServeConfig(t, fc, "svc:updated-svc", false)
verifyVIPService(t, ft, "svc:updated-svc", []string{"443"})
verifyTailscaledConfig(t, fc, []string{"svc:updated-svc"})
_, err := ft.GetVIPService(context.Background(), tailcfg.ServiceName("svc:my-svc"))
if err == nil {
t.Fatalf("svc:my-svc not cleaned up")
}
var errResp *tailscale.ErrResponse
if !errors.As(err, &errResp) || errResp.Status != http.StatusNotFound {
t.Fatalf("unexpected error: %v", err)
}
}
func TestValidateIngress(t *testing.T) {
@@ -392,6 +477,8 @@ func TestIngressPGReconciler_HTTPEndpoint(t *testing.T) {
// Verify initial reconciliation with HTTP enabled
expectReconciled(t, ingPGR, "default", "test-ingress")
populateTLSSecret(context.Background(), fc, "test-pg", "my-svc.ts.net")
expectReconciled(t, ingPGR, "default", "test-ingress")
verifyVIPService(t, ft, "svc:my-svc", []string{"80", "443"})
verifyServeConfig(t, fc, "svc:my-svc", true)
@@ -404,6 +491,31 @@ func TestIngressPGReconciler_HTTPEndpoint(t *testing.T) {
t.Fatal(err)
}
// Status will be empty until the VIPService shows up in prefs.
if !reflect.DeepEqual(ing.Status.LoadBalancer.Ingress, []networkingv1.IngressLoadBalancerIngress(nil)) {
t.Errorf("incorrect Ingress status: got %v, want empty",
ing.Status.LoadBalancer.Ingress)
}
// Add the VIPService to prefs to have the Ingress recognised as ready.
mustCreate(t, fc, &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: "test-pg-0",
Namespace: "operator-ns",
Labels: pgSecretLabels("test-pg", "state"),
},
Data: map[string][]byte{
"_current-profile": []byte("profile-foo"),
"profile-foo": []byte(`{"AdvertiseServices":["svc:my-svc"],"Config":{"NodeID":"node-foo"}}`),
},
})
// Reconcile and re-fetch Ingress.
expectReconciled(t, ingPGR, "default", "test-ingress")
if err := fc.Get(context.Background(), client.ObjectKeyFromObject(ing), ing); err != nil {
t.Fatal(err)
}
wantStatus := []networkingv1.IngressPortStatus{
{Port: 443, Protocol: "TCP"},
{Port: 80, Protocol: "TCP"},
@@ -510,6 +622,7 @@ func verifyServeConfig(t *testing.T, fc client.Client, serviceName string, wantH
}
func verifyTailscaledConfig(t *testing.T, fc client.Client, expectedServices []string) {
t.Helper()
var expected string
if expectedServices != nil {
expectedServicesJSON, err := json.Marshal(expectedServices)
@@ -644,8 +757,10 @@ func TestIngressPGReconciler_MultiCluster(t *testing.T) {
// Simulate existing VIPService from another cluster
existingVIPSvc := &tailscale.VIPService{
Name: "svc:my-svc",
Comment: `{"ownerrefs":[{"operatorID":"operator-2"}]}`,
Name: "svc:my-svc",
Annotations: map[string]string{
ownerAnnotation: `{"ownerrefs":[{"operatorID":"operator-2"}]}`,
},
}
ft.vipServices = map[tailcfg.ServiceName]*tailscale.VIPService{
"svc:my-svc": existingVIPSvc,
@@ -662,17 +777,17 @@ func TestIngressPGReconciler_MultiCluster(t *testing.T) {
t.Fatal("VIPService not found")
}
c := &comment{}
if err := json.Unmarshal([]byte(vipSvc.Comment), c); err != nil {
t.Fatalf("parsing comment: %v", err)
o, err := parseOwnerAnnotation(vipSvc)
if err != nil {
t.Fatalf("parsing owner annotation: %v", err)
}
wantOwnerRefs := []OwnerRef{
{OperatorID: "operator-2"},
{OperatorID: "operator-1"},
}
if !reflect.DeepEqual(c.OwnerRefs, wantOwnerRefs) {
t.Errorf("incorrect owner refs\ngot: %+v\nwant: %+v", c.OwnerRefs, wantOwnerRefs)
if !reflect.DeepEqual(o.OwnerRefs, wantOwnerRefs) {
t.Errorf("incorrect owner refs\ngot: %+v\nwant: %+v", o.OwnerRefs, wantOwnerRefs)
}
// Delete the Ingress and verify VIPService still exists with one owner ref
@@ -689,15 +804,40 @@ func TestIngressPGReconciler_MultiCluster(t *testing.T) {
t.Fatal("VIPService was incorrectly deleted")
}
c = &comment{}
if err := json.Unmarshal([]byte(vipSvc.Comment), c); err != nil {
t.Fatalf("parsing comment after deletion: %v", err)
o, err = parseOwnerAnnotation(vipSvc)
if err != nil {
t.Fatalf("parsing owner annotation: %v", err)
}
wantOwnerRefs = []OwnerRef{
{OperatorID: "operator-2"},
}
if !reflect.DeepEqual(c.OwnerRefs, wantOwnerRefs) {
t.Errorf("incorrect owner refs after deletion\ngot: %+v\nwant: %+v", c.OwnerRefs, wantOwnerRefs)
if !reflect.DeepEqual(o.OwnerRefs, wantOwnerRefs) {
t.Errorf("incorrect owner refs after deletion\ngot: %+v\nwant: %+v", o.OwnerRefs, wantOwnerRefs)
}
}
func populateTLSSecret(ctx context.Context, c client.Client, pgName, domain string) error {
secret := &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: domain,
Namespace: "operator-ns",
Labels: map[string]string{
kubetypes.LabelManaged: "true",
labelProxyGroup: pgName,
labelDomain: domain,
kubetypes.LabelSecretType: "certs",
},
},
Type: corev1.SecretTypeTLS,
Data: map[string][]byte{
corev1.TLSCertKey: []byte("fake-cert"),
corev1.TLSPrivateKeyKey: []byte("fake-key"),
},
}
_, err := createOrUpdate(ctx, c, "operator-ns", secret, func(s *corev1.Secret) {
s.Data = secret.Data
})
return err
}

View File

@@ -6,6 +6,7 @@
package main
import (
"context"
"testing"
"go.uber.org/zap"
@@ -15,17 +16,18 @@ import (
apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/client/fake"
"tailscale.com/ipn"
tsapi "tailscale.com/k8s-operator/apis/v1alpha1"
"tailscale.com/kube/kubetypes"
"tailscale.com/tstest"
"tailscale.com/types/ptr"
"tailscale.com/util/mak"
)
func TestTailscaleIngress(t *testing.T) {
tsIngressClass := &networkingv1.IngressClass{ObjectMeta: metav1.ObjectMeta{Name: "tailscale"}, Spec: networkingv1.IngressClassSpec{Controller: "tailscale.com/ts-ingress"}}
fc := fake.NewFakeClient(tsIngressClass)
fc := fake.NewFakeClient(ingressClass())
ft := &fakeTSClient{}
fakeTsnetServer := &fakeTSNetServer{certDomains: []string{"foo.com"}}
zl, err := zap.NewDevelopment()
@@ -46,45 +48,8 @@ func TestTailscaleIngress(t *testing.T) {
}
// 1. Resources get created for regular Ingress
ing := &networkingv1.Ingress{
TypeMeta: metav1.TypeMeta{Kind: "Ingress", APIVersion: "networking.k8s.io/v1"},
ObjectMeta: metav1.ObjectMeta{
Name: "test",
Namespace: "default",
// The apiserver is supposed to set the UID, but the fake client
// doesn't. So, set it explicitly because other code later depends
// on it being set.
UID: types.UID("1234-UID"),
},
Spec: networkingv1.IngressSpec{
IngressClassName: ptr.To("tailscale"),
DefaultBackend: &networkingv1.IngressBackend{
Service: &networkingv1.IngressServiceBackend{
Name: "test",
Port: networkingv1.ServiceBackendPort{
Number: 8080,
},
},
},
TLS: []networkingv1.IngressTLS{
{Hosts: []string{"default-test"}},
},
},
}
mustCreate(t, fc, ing)
mustCreate(t, fc, &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: "test",
Namespace: "default",
},
Spec: corev1.ServiceSpec{
ClusterIP: "1.2.3.4",
Ports: []corev1.ServicePort{{
Port: 8080,
Name: "http"},
},
},
})
mustCreate(t, fc, ingress())
mustCreate(t, fc, service())
expectReconciled(t, ingR, "default", "test")
@@ -114,6 +79,9 @@ func TestTailscaleIngress(t *testing.T) {
mak.Set(&secret.Data, "device_fqdn", []byte("foo.tailnetxyz.ts.net"))
})
expectReconciled(t, ingR, "default", "test")
// Get the ingress and update it with expected changes
ing := ingress()
ing.Finalizers = append(ing.Finalizers, "tailscale.com/finalizer")
ing.Status.LoadBalancer = networkingv1.IngressLoadBalancerStatus{
Ingress: []networkingv1.IngressLoadBalancerIngress{
@@ -143,8 +111,7 @@ func TestTailscaleIngress(t *testing.T) {
}
func TestTailscaleIngressHostname(t *testing.T) {
tsIngressClass := &networkingv1.IngressClass{ObjectMeta: metav1.ObjectMeta{Name: "tailscale"}, Spec: networkingv1.IngressClassSpec{Controller: "tailscale.com/ts-ingress"}}
fc := fake.NewFakeClient(tsIngressClass)
fc := fake.NewFakeClient(ingressClass())
ft := &fakeTSClient{}
fakeTsnetServer := &fakeTSNetServer{certDomains: []string{"foo.com"}}
zl, err := zap.NewDevelopment()
@@ -165,45 +132,8 @@ func TestTailscaleIngressHostname(t *testing.T) {
}
// 1. Resources get created for regular Ingress
ing := &networkingv1.Ingress{
TypeMeta: metav1.TypeMeta{Kind: "Ingress", APIVersion: "networking.k8s.io/v1"},
ObjectMeta: metav1.ObjectMeta{
Name: "test",
Namespace: "default",
// The apiserver is supposed to set the UID, but the fake client
// doesn't. So, set it explicitly because other code later depends
// on it being set.
UID: types.UID("1234-UID"),
},
Spec: networkingv1.IngressSpec{
IngressClassName: ptr.To("tailscale"),
DefaultBackend: &networkingv1.IngressBackend{
Service: &networkingv1.IngressServiceBackend{
Name: "test",
Port: networkingv1.ServiceBackendPort{
Number: 8080,
},
},
},
TLS: []networkingv1.IngressTLS{
{Hosts: []string{"default-test"}},
},
},
}
mustCreate(t, fc, ing)
mustCreate(t, fc, &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: "test",
Namespace: "default",
},
Spec: corev1.ServiceSpec{
ClusterIP: "1.2.3.4",
Ports: []corev1.ServicePort{{
Port: 8080,
Name: "http"},
},
},
})
mustCreate(t, fc, ingress())
mustCreate(t, fc, service())
expectReconciled(t, ingR, "default", "test")
@@ -241,8 +171,10 @@ func TestTailscaleIngressHostname(t *testing.T) {
mak.Set(&secret.Data, "device_fqdn", []byte("foo.tailnetxyz.ts.net"))
})
expectReconciled(t, ingR, "default", "test")
ing.Finalizers = append(ing.Finalizers, "tailscale.com/finalizer")
// Get the ingress and update it with expected changes
ing := ingress()
ing.Finalizers = append(ing.Finalizers, "tailscale.com/finalizer")
expectEqual(t, fc, ing)
// 3. Ingress proxy with capability version >= 110 advertises HTTPS endpoint
@@ -299,10 +231,9 @@ func TestTailscaleIngressWithProxyClass(t *testing.T) {
Annotations: map[string]string{"bar.io/foo": "some-val"},
Pod: &tsapi.Pod{Annotations: map[string]string{"foo.io/bar": "some-val"}}}},
}
tsIngressClass := &networkingv1.IngressClass{ObjectMeta: metav1.ObjectMeta{Name: "tailscale"}, Spec: networkingv1.IngressClassSpec{Controller: "tailscale.com/ts-ingress"}}
fc := fake.NewClientBuilder().
WithScheme(tsapi.GlobalScheme).
WithObjects(pc, tsIngressClass).
WithObjects(pc, ingressClass()).
WithStatusSubresource(pc).
Build()
ft := &fakeTSClient{}
@@ -326,45 +257,8 @@ func TestTailscaleIngressWithProxyClass(t *testing.T) {
// 1. Ingress is created with no ProxyClass specified, default proxy
// resources get configured.
ing := &networkingv1.Ingress{
TypeMeta: metav1.TypeMeta{Kind: "Ingress", APIVersion: "networking.k8s.io/v1"},
ObjectMeta: metav1.ObjectMeta{
Name: "test",
Namespace: "default",
// The apiserver is supposed to set the UID, but the fake client
// doesn't. So, set it explicitly because other code later depends
// on it being set.
UID: types.UID("1234-UID"),
},
Spec: networkingv1.IngressSpec{
IngressClassName: ptr.To("tailscale"),
DefaultBackend: &networkingv1.IngressBackend{
Service: &networkingv1.IngressServiceBackend{
Name: "test",
Port: networkingv1.ServiceBackendPort{
Number: 8080,
},
},
},
TLS: []networkingv1.IngressTLS{
{Hosts: []string{"default-test"}},
},
},
}
mustCreate(t, fc, ing)
mustCreate(t, fc, &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: "test",
Namespace: "default",
},
Spec: corev1.ServiceSpec{
ClusterIP: "1.2.3.4",
Ports: []corev1.ServicePort{{
Port: 8080,
Name: "http"},
},
},
})
mustCreate(t, fc, ingress())
mustCreate(t, fc, service())
expectReconciled(t, ingR, "default", "test")
@@ -432,54 +326,19 @@ func TestTailscaleIngressWithServiceMonitor(t *testing.T) {
ObservedGeneration: 1,
}}},
}
ing := &networkingv1.Ingress{
TypeMeta: metav1.TypeMeta{Kind: "Ingress", APIVersion: "networking.k8s.io/v1"},
ObjectMeta: metav1.ObjectMeta{
Name: "test",
Namespace: "default",
// The apiserver is supposed to set the UID, but the fake client
// doesn't. So, set it explicitly because other code later depends
// on it being set.
UID: types.UID("1234-UID"),
Labels: map[string]string{
"tailscale.com/proxy-class": "metrics",
},
},
Spec: networkingv1.IngressSpec{
IngressClassName: ptr.To("tailscale"),
DefaultBackend: &networkingv1.IngressBackend{
Service: &networkingv1.IngressServiceBackend{
Name: "test",
Port: networkingv1.ServiceBackendPort{
Number: 8080,
},
},
},
TLS: []networkingv1.IngressTLS{
{Hosts: []string{"default-test"}},
},
},
}
svc := &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: "test",
Namespace: "default",
},
Spec: corev1.ServiceSpec{
ClusterIP: "1.2.3.4",
Ports: []corev1.ServicePort{{
Port: 8080,
Name: "http"},
},
},
}
crd := &apiextensionsv1.CustomResourceDefinition{ObjectMeta: metav1.ObjectMeta{Name: serviceMonitorCRD}}
tsIngressClass := &networkingv1.IngressClass{ObjectMeta: metav1.ObjectMeta{Name: "tailscale"}, Spec: networkingv1.IngressClassSpec{Controller: "tailscale.com/ts-ingress"}}
// Create fake client with ProxyClass, IngressClass, Ingress with metrics ProxyClass, and Service
ing := ingress()
ing.Labels = map[string]string{
LabelProxyClass: "metrics",
}
fc := fake.NewClientBuilder().
WithScheme(tsapi.GlobalScheme).
WithObjects(pc, tsIngressClass, ing, svc).
WithObjects(pc, ingressClass(), ing, service()).
WithStatusSubresource(pc).
Build()
ft := &fakeTSClient{}
fakeTsnetServer := &fakeTSNetServer{certDomains: []string{"foo.com"}}
zl, err := zap.NewDevelopment()
@@ -560,3 +419,118 @@ func TestTailscaleIngressWithServiceMonitor(t *testing.T) {
expectMissing[corev1.Service](t, fc, "operator-ns", metricsResourceName(shortName))
// ServiceMonitor gets garbage collected when the Service is deleted - we cannot test that here.
}
func TestIngressLetsEncryptStaging(t *testing.T) {
cl := tstest.NewClock(tstest.ClockOpts{})
zl := zap.Must(zap.NewDevelopment())
pcLEStaging, pcLEStagingFalse, pcOther := proxyClassesForLEStagingTest()
testCases := testCasesForLEStagingTests(pcLEStaging, pcLEStagingFalse, pcOther)
for _, tt := range testCases {
t.Run(tt.name, func(t *testing.T) {
builder := fake.NewClientBuilder().
WithScheme(tsapi.GlobalScheme)
builder = builder.WithObjects(pcLEStaging, pcLEStagingFalse, pcOther).
WithStatusSubresource(pcLEStaging, pcLEStagingFalse, pcOther)
fc := builder.Build()
if tt.proxyClassPerResource != "" || tt.defaultProxyClass != "" {
name := tt.proxyClassPerResource
if name == "" {
name = tt.defaultProxyClass
}
setProxyClassReady(t, fc, cl, name)
}
mustCreate(t, fc, ingressClass())
mustCreate(t, fc, service())
ing := ingress()
if tt.proxyClassPerResource != "" {
ing.Labels = map[string]string{
LabelProxyClass: tt.proxyClassPerResource,
}
}
mustCreate(t, fc, ing)
ingR := &IngressReconciler{
Client: fc,
ssr: &tailscaleSTSReconciler{
Client: fc,
tsClient: &fakeTSClient{},
tsnetServer: &fakeTSNetServer{certDomains: []string{"test-host"}},
defaultTags: []string{"tag:test"},
operatorNamespace: "operator-ns",
proxyImage: "tailscale/tailscale:test",
},
logger: zl.Sugar(),
defaultProxyClass: tt.defaultProxyClass,
}
expectReconciled(t, ingR, "default", "test")
_, shortName := findGenName(t, fc, "default", "test", "ingress")
sts := &appsv1.StatefulSet{}
if err := fc.Get(context.Background(), client.ObjectKey{Namespace: "operator-ns", Name: shortName}, sts); err != nil {
t.Fatalf("failed to get StatefulSet: %v", err)
}
if tt.useLEStagingEndpoint {
verifyEnvVar(t, sts, "TS_DEBUG_ACME_DIRECTORY_URL", letsEncryptStagingEndpoint)
} else {
verifyEnvVarNotPresent(t, sts, "TS_DEBUG_ACME_DIRECTORY_URL")
}
})
}
}
func ingressClass() *networkingv1.IngressClass {
return &networkingv1.IngressClass{
ObjectMeta: metav1.ObjectMeta{Name: "tailscale"},
Spec: networkingv1.IngressClassSpec{Controller: "tailscale.com/ts-ingress"},
}
}
func service() *corev1.Service {
return &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: "test",
Namespace: "default",
},
Spec: corev1.ServiceSpec{
ClusterIP: "1.2.3.4",
Ports: []corev1.ServicePort{{
Port: 8080,
Name: "http"},
},
},
}
}
func ingress() *networkingv1.Ingress {
return &networkingv1.Ingress{
TypeMeta: metav1.TypeMeta{Kind: "Ingress", APIVersion: "networking.k8s.io/v1"},
ObjectMeta: metav1.ObjectMeta{
Name: "test",
Namespace: "default",
UID: types.UID("1234-UID"),
},
Spec: networkingv1.IngressSpec{
IngressClassName: ptr.To("tailscale"),
DefaultBackend: &networkingv1.IngressBackend{
Service: &networkingv1.IngressServiceBackend{
Name: "test",
Port: networkingv1.ServiceBackendPort{
Number: 8080,
},
},
},
TLS: []networkingv1.IngressTLS{
{Hosts: []string{"default-test"}},
},
},
}
}

View File

@@ -19,6 +19,7 @@ import (
"k8s.io/apimachinery/pkg/types"
"sigs.k8s.io/controller-runtime/pkg/client"
tsapi "tailscale.com/k8s-operator/apis/v1alpha1"
"tailscale.com/kube/kubetypes"
)
const (
@@ -222,7 +223,7 @@ func metricsResourceName(stsName string) string {
// proxy.
func metricsResourceLabels(opts *metricsOpts) map[string]string {
lbls := map[string]string{
LabelManaged: "true",
kubetypes.LabelManaged: "true",
labelMetricsTarget: opts.proxyStsName,
labelPromProxyType: opts.proxyType,
labelPromProxyParentName: opts.proxyLabels[LabelParentName],

View File

@@ -9,7 +9,6 @@ package main
import (
"context"
"fmt"
"net/http"
"os"
"regexp"
@@ -40,7 +39,6 @@ import (
"sigs.k8s.io/controller-runtime/pkg/manager"
"sigs.k8s.io/controller-runtime/pkg/manager/signals"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
"tailscale.com/client/local"
"tailscale.com/client/tailscale"
"tailscale.com/hostinfo"
"tailscale.com/ipn"
@@ -333,39 +331,6 @@ func runReconcilers(opts reconcilerOpts) {
if err != nil {
startlog.Fatalf("could not create ingress reconciler: %v", err)
}
lc, err := opts.tsServer.LocalClient()
if err != nil {
startlog.Fatalf("could not get local client: %v", err)
}
id, err := id(context.Background(), lc)
if err != nil {
startlog.Fatalf("error determining stable ID of the operator's Tailscale device: %v", err)
}
ingressProxyGroupFilter := handler.EnqueueRequestsFromMapFunc(ingressesFromIngressProxyGroup(mgr.GetClient(), opts.log))
err = builder.
ControllerManagedBy(mgr).
For(&networkingv1.Ingress{}).
Named("ingress-pg-reconciler").
Watches(&corev1.Service{}, handler.EnqueueRequestsFromMapFunc(serviceHandlerForIngressPG(mgr.GetClient(), startlog))).
Watches(&tsapi.ProxyGroup{}, ingressProxyGroupFilter).
Complete(&HAIngressReconciler{
recorder: eventRecorder,
tsClient: opts.tsClient,
tsnetServer: opts.tsServer,
defaultTags: strings.Split(opts.proxyTags, ","),
Client: mgr.GetClient(),
logger: opts.log.Named("ingress-pg-reconciler"),
lc: lc,
operatorID: id,
tsNamespace: opts.tailscaleNamespace,
})
if err != nil {
startlog.Fatalf("could not create ingress-pg-reconciler: %v", err)
}
if err := mgr.GetFieldIndexer().IndexField(context.Background(), new(networkingv1.Ingress), indexIngressProxyGroup, indexPGIngresses); err != nil {
startlog.Fatalf("failed setting up indexer for HA Ingresses: %v", err)
}
connectorFilter := handler.EnqueueRequestsFromMapFunc(managedResourceHandlerForType("connector"))
// If a ProxyClassChanges, enqueue all Connectors that have
// .spec.proxyClass set to the name of this ProxyClass.
@@ -636,8 +601,8 @@ func enqueueAllIngressEgressProxySvcsInNS(ns string, cl client.Client, logger *z
// Get all headless Services for proxies configured using Service.
svcProxyLabels := map[string]string{
LabelManaged: "true",
LabelParentType: "svc",
kubetypes.LabelManaged: "true",
LabelParentType: "svc",
}
svcHeadlessSvcList := &corev1.ServiceList{}
if err := cl.List(ctx, svcHeadlessSvcList, client.InNamespace(ns), client.MatchingLabels(svcProxyLabels)); err != nil {
@@ -650,8 +615,8 @@ func enqueueAllIngressEgressProxySvcsInNS(ns string, cl client.Client, logger *z
// Get all headless Services for proxies configured using Ingress.
ingProxyLabels := map[string]string{
LabelManaged: "true",
LabelParentType: "ingress",
kubetypes.LabelManaged: "true",
LabelParentType: "ingress",
}
ingHeadlessSvcList := &corev1.ServiceList{}
if err := cl.List(ctx, ingHeadlessSvcList, client.InNamespace(ns), client.MatchingLabels(ingProxyLabels)); err != nil {
@@ -718,7 +683,7 @@ func dnsRecordsReconcilerIngressHandler(ns string, isDefaultLoadBalancer bool, c
func isManagedResource(o client.Object) bool {
ls := o.GetLabels()
return ls[LabelManaged] == "true"
return ls[kubetypes.LabelManaged] == "true"
}
func isManagedByType(o client.Object, typ string) bool {
@@ -955,7 +920,7 @@ func egressPodsHandler(_ context.Context, o client.Object) []reconcile.Request {
// returns reconciler requests for all egress EndpointSlices for that ProxyGroup.
func egressEpsFromPGPods(cl client.Client, ns string) handler.MapFunc {
return func(_ context.Context, o client.Object) []reconcile.Request {
if v, ok := o.GetLabels()[LabelManaged]; !ok || v != "true" {
if v, ok := o.GetLabels()[kubetypes.LabelManaged]; !ok || v != "true" {
return nil
}
// TODO(irbekrm): for now this is good enough as all ProxyGroups are egress. Add a type check once we
@@ -975,15 +940,13 @@ func egressEpsFromPGPods(cl client.Client, ns string) handler.MapFunc {
// returns reconciler requests for all egress EndpointSlices for that ProxyGroup.
func egressEpsFromPGStateSecrets(cl client.Client, ns string) handler.MapFunc {
return func(_ context.Context, o client.Object) []reconcile.Request {
if v, ok := o.GetLabels()[LabelManaged]; !ok || v != "true" {
if v, ok := o.GetLabels()[kubetypes.LabelManaged]; !ok || v != "true" {
return nil
}
// TODO(irbekrm): for now this is good enough as all ProxyGroups are egress. Add a type check once we
// have ingress ProxyGroups.
if parentType := o.GetLabels()[LabelParentType]; parentType != "proxygroup" {
return nil
}
if secretType := o.GetLabels()[labelSecretType]; secretType != "state" {
if secretType := o.GetLabels()[kubetypes.LabelSecretType]; secretType != "state" {
return nil
}
pg, ok := o.GetLabels()[LabelParentName]
@@ -1000,7 +963,7 @@ func egressSvcFromEps(_ context.Context, o client.Object) []reconcile.Request {
if typ := o.GetLabels()[labelSvcType]; typ != typeEgress {
return nil
}
if v, ok := o.GetLabels()[LabelManaged]; !ok || v != "true" {
if v, ok := o.GetLabels()[kubetypes.LabelManaged]; !ok || v != "true" {
return nil
}
svcName, ok := o.GetLabels()[LabelParentName]
@@ -1070,36 +1033,6 @@ func egressSvcsFromEgressProxyGroup(cl client.Client, logger *zap.SugaredLogger)
}
}
// ingressesFromIngressProxyGroup is an event handler for ingress ProxyGroups. It returns reconcile requests for all
// user-created Ingresses that should be exposed on this ProxyGroup.
func ingressesFromIngressProxyGroup(cl client.Client, logger *zap.SugaredLogger) handler.MapFunc {
return func(ctx context.Context, o client.Object) []reconcile.Request {
pg, ok := o.(*tsapi.ProxyGroup)
if !ok {
logger.Infof("[unexpected] ProxyGroup handler triggered for an object that is not a ProxyGroup")
return nil
}
if pg.Spec.Type != tsapi.ProxyGroupTypeIngress {
return nil
}
ingList := &networkingv1.IngressList{}
if err := cl.List(ctx, ingList, client.MatchingFields{indexIngressProxyGroup: pg.Name}); err != nil {
logger.Infof("error listing Ingresses: %v, skipping a reconcile for event on ProxyGroup %s", err, pg.Name)
return nil
}
reqs := make([]reconcile.Request, 0)
for _, svc := range ingList.Items {
reqs = append(reqs, reconcile.Request{
NamespacedName: types.NamespacedName{
Namespace: svc.Namespace,
Name: svc.Name,
},
})
}
return reqs
}
}
// epsFromExternalNameService is an event handler for ExternalName Services that define a Tailscale egress service that
// should be exposed on a ProxyGroup. It returns reconcile requests for EndpointSlices created for this Service.
func epsFromExternalNameService(cl client.Client, logger *zap.SugaredLogger, ns string) handler.MapFunc {
@@ -1145,9 +1078,9 @@ func podsFromEgressEps(cl client.Client, logger *zap.SugaredLogger, ns string) h
return nil
}
podLabels := map[string]string{
LabelManaged: "true",
LabelParentType: "proxygroup",
LabelParentName: eps.Labels[labelProxyGroup],
kubetypes.LabelManaged: "true",
LabelParentType: "proxygroup",
LabelParentName: eps.Labels[labelProxyGroup],
}
podList := &corev1.PodList{}
if err := cl.List(ctx, podList, client.InNamespace(ns),
@@ -1220,63 +1153,7 @@ func indexEgressServices(o client.Object) []string {
return []string{o.GetAnnotations()[AnnotationProxyGroup]}
}
// indexPGIngresses adds a local index to cached Tailscale Ingresses meant to be exposed on a ProxyGroup. The index is
// used as a list filter.
func indexPGIngresses(o client.Object) []string {
if !hasProxyGroupAnnotation(o) {
return nil
}
return []string{o.GetAnnotations()[AnnotationProxyGroup]}
}
// serviceHandlerForIngressPG returns a handler for Service events that ensures that if the Service
// associated with an event is a backend Service for a tailscale Ingress with ProxyGroup annotation,
// the associated Ingress gets reconciled.
func serviceHandlerForIngressPG(cl client.Client, logger *zap.SugaredLogger) handler.MapFunc {
return func(ctx context.Context, o client.Object) []reconcile.Request {
ingList := networkingv1.IngressList{}
if err := cl.List(ctx, &ingList, client.InNamespace(o.GetNamespace())); err != nil {
logger.Debugf("error listing Ingresses: %v", err)
return nil
}
reqs := make([]reconcile.Request, 0)
for _, ing := range ingList.Items {
if ing.Spec.IngressClassName == nil || *ing.Spec.IngressClassName != tailscaleIngressClassName {
continue
}
if !hasProxyGroupAnnotation(&ing) {
continue
}
if ing.Spec.DefaultBackend != nil && ing.Spec.DefaultBackend.Service != nil && ing.Spec.DefaultBackend.Service.Name == o.GetName() {
reqs = append(reqs, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&ing)})
}
for _, rule := range ing.Spec.Rules {
if rule.HTTP == nil {
continue
}
for _, path := range rule.HTTP.Paths {
if path.Backend.Service != nil && path.Backend.Service.Name == o.GetName() {
reqs = append(reqs, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(&ing)})
}
}
}
}
return reqs
}
}
func hasProxyGroupAnnotation(obj client.Object) bool {
ing := obj.(*networkingv1.Ingress)
return ing.Annotations[AnnotationProxyGroup] != ""
}
func id(ctx context.Context, lc *local.Client) (string, error) {
st, err := lc.StatusWithoutPeers(ctx)
if err != nil {
return "", fmt.Errorf("error getting tailscale status: %w", err)
}
if st.Self == nil {
return "", fmt.Errorf("unexpected: device's status does not contain node's metadata")
}
return string(st.Self.ID), nil
}

View File

@@ -1387,10 +1387,10 @@ func Test_serviceHandlerForIngress(t *testing.T) {
Name: "headless-1",
Namespace: "tailscale",
Labels: map[string]string{
LabelManaged: "true",
LabelParentName: "ing-1",
LabelParentNamespace: "ns-1",
LabelParentType: "ingress",
kubetypes.LabelManaged: "true",
LabelParentName: "ing-1",
LabelParentNamespace: "ns-1",
LabelParentType: "ingress",
},
},
}

View File

@@ -302,7 +302,10 @@ func (r *ProxyGroupReconciler) maybeProvision(ctx context.Context, pg *tsapi.Pro
if err != nil {
return fmt.Errorf("error generating StatefulSet spec: %w", err)
}
ss = applyProxyClassToStatefulSet(proxyClass, ss, nil, logger)
cfg := &tailscaleSTSConfig{
proxyType: string(pg.Spec.Type),
}
ss = applyProxyClassToStatefulSet(proxyClass, ss, cfg, logger)
capver, err := r.capVerForPG(ctx, pg, logger)
if err != nil {
return fmt.Errorf("error getting device info: %w", err)
@@ -461,7 +464,7 @@ func (r *ProxyGroupReconciler) ensureConfigSecretsCreated(ctx context.Context, p
var existingCfgSecret *corev1.Secret // unmodified copy of secret
if err := r.Get(ctx, client.ObjectKeyFromObject(cfgSecret), cfgSecret); err == nil {
logger.Debugf("secret %s/%s already exists", cfgSecret.GetNamespace(), cfgSecret.GetName())
logger.Debugf("Secret %s/%s already exists", cfgSecret.GetNamespace(), cfgSecret.GetName())
existingCfgSecret = cfgSecret.DeepCopy()
} else if !apierrors.IsNotFound(err) {
return "", err
@@ -469,7 +472,7 @@ func (r *ProxyGroupReconciler) ensureConfigSecretsCreated(ctx context.Context, p
var authKey string
if existingCfgSecret == nil {
logger.Debugf("creating authkey for new ProxyGroup proxy")
logger.Debugf("Creating authkey for new ProxyGroup proxy")
tags := pg.Spec.Tags.Stringify()
if len(tags) == 0 {
tags = r.defaultTags
@@ -490,7 +493,7 @@ func (r *ProxyGroupReconciler) ensureConfigSecretsCreated(ctx context.Context, p
if err != nil {
return "", fmt.Errorf("error marshalling tailscaled config: %w", err)
}
mak.Set(&cfgSecret.StringData, tsoperator.TailscaledConfigFileName(cap), string(cfgJSON))
mak.Set(&cfgSecret.Data, tsoperator.TailscaledConfigFileName(cap), cfgJSON)
}
// The config sha256 sum is a value for a hash annotation used to trigger
@@ -520,12 +523,14 @@ func (r *ProxyGroupReconciler) ensureConfigSecretsCreated(ctx context.Context, p
}
if existingCfgSecret != nil {
logger.Debugf("patching the existing ProxyGroup config Secret %s", cfgSecret.Name)
if err := r.Patch(ctx, cfgSecret, client.MergeFrom(existingCfgSecret)); err != nil {
return "", err
if !apiequality.Semantic.DeepEqual(existingCfgSecret, cfgSecret) {
logger.Debugf("Updating the existing ProxyGroup config Secret %s", cfgSecret.Name)
if err := r.Update(ctx, cfgSecret); err != nil {
return "", err
}
}
} else {
logger.Debugf("creating a new config Secret %s for the ProxyGroup", cfgSecret.Name)
logger.Debugf("Creating a new config Secret %s for the ProxyGroup", cfgSecret.Name)
if err := r.Create(ctx, cfgSecret); err != nil {
return "", err
}
@@ -645,7 +650,7 @@ func (r *ProxyGroupReconciler) getNodeMetadata(ctx context.Context, pg *tsapi.Pr
return nil, fmt.Errorf("unexpected secret %s was labelled as owned by the ProxyGroup %s: %w", secret.Name, pg.Name, err)
}
id, dnsName, ok, err := getNodeMetadata(ctx, &secret)
prefs, ok, err := getDevicePrefs(&secret)
if err != nil {
return nil, err
}
@@ -656,8 +661,8 @@ func (r *ProxyGroupReconciler) getNodeMetadata(ctx context.Context, pg *tsapi.Pr
nm := nodeMetadata{
ordinal: ordinal,
stateSecret: &secret,
tsID: id,
dnsName: dnsName,
tsID: prefs.Config.NodeID,
dnsName: prefs.Config.UserProfile.LoginName,
}
pod := &corev1.Pod{}
if err := r.Get(ctx, client.ObjectKey{Namespace: r.tsNamespace, Name: secret.Name}, pod); err != nil && !apierrors.IsNotFound(err) {

View File

@@ -178,7 +178,15 @@ func pgStatefulSet(pg *tsapi.ProxyGroup, namespace, image, tsFirewallMode string
corev1.EnvVar{
Name: "TS_SERVE_CONFIG",
Value: fmt.Sprintf("/etc/proxies/%s", serveConfigKey),
})
},
corev1.EnvVar{
// Run proxies in cert share mode to
// ensure that only one TLS cert is
// issued for an HA Ingress.
Name: "TS_EXPERIMENTAL_CERT_SHARE",
Value: "true",
},
)
}
return append(c.Env, envs...)
}()
@@ -225,6 +233,13 @@ func pgRole(pg *tsapi.ProxyGroup, namespace string) *rbacv1.Role {
OwnerReferences: pgOwnerReference(pg),
},
Rules: []rbacv1.PolicyRule{
{
APIGroups: []string{""},
Resources: []string{"secrets"},
Verbs: []string{
"list",
},
},
{
APIGroups: []string{""},
Resources: []string{"secrets"},
@@ -318,9 +333,9 @@ func pgIngressCM(pg *tsapi.ProxyGroup, namespace string) *corev1.ConfigMap {
}
}
func pgSecretLabels(pgName, typ string) map[string]string {
func pgSecretLabels(pgName, secretType string) map[string]string {
return pgLabels(pgName, map[string]string{
labelSecretType: typ, // "config" or "state".
kubetypes.LabelSecretType: secretType, // "config" or "state".
})
}
@@ -330,7 +345,7 @@ func pgLabels(pgName string, customLabels map[string]string) map[string]string {
l[k] = v
}
l[LabelManaged] = "true"
l[kubetypes.LabelManaged] = "true"
l[LabelParentType] = "proxygroup"
l[LabelParentName] = pgName

View File

@@ -247,7 +247,6 @@ func TestProxyGroup(t *testing.T) {
// The fake client does not clean up objects whose owner has been
// deleted, so we can't test for the owned resources getting deleted.
})
}
func TestProxyGroupTypes(t *testing.T) {
@@ -417,6 +416,7 @@ func TestProxyGroupTypes(t *testing.T) {
}
verifyEnvVar(t, sts, "TS_INTERNAL_APP", kubetypes.AppProxyGroupIngress)
verifyEnvVar(t, sts, "TS_SERVE_CONFIG", "/etc/proxies/serve-config.json")
verifyEnvVar(t, sts, "TS_EXPERIMENTAL_CERT_SHARE", "true")
// Verify ConfigMap volume mount
cmName := fmt.Sprintf("%s-ingress-config", pg.Name)
@@ -475,8 +475,6 @@ func TestIngressAdvertiseServicesConfigPreserved(t *testing.T) {
Name: pgConfigSecretName(pgName, 0),
Namespace: tsNamespace,
},
// Write directly to Data because the fake client doesn't copy the write-only
// StringData field across to Data for us.
Data: map[string][]byte{
tsoperator.TailscaledConfigFileName(106): existingConfigBytes,
},
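Note (not part of the diff): the comment above relies on a Kubernetes detail worth spelling out. On a real API server, Secret.StringData is a write-only convenience field that the server merges into Data as raw bytes; the controller-runtime fake client performs no such merge, so tests that later read Data must populate Data directly. A minimal sketch, with an illustrative key name rather than the real tailscaled config key:

package example

import corev1 "k8s.io/api/core/v1"

// exampleSecret mirrors what the test does: write the config bytes straight
// into Data (rather than StringData) so the fake client can read them back.
func exampleSecret(cfgJSON []byte) *corev1.Secret {
	return &corev1.Secret{
		Data: map[string][]byte{
			"tailscaled-config.json": cfgJSON, // illustrative key name
		},
	}
}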
@@ -514,10 +512,64 @@ func TestIngressAdvertiseServicesConfigPreserved(t *testing.T) {
Namespace: tsNamespace,
ResourceVersion: "2",
},
StringData: map[string]string{
tsoperator.TailscaledConfigFileName(106): string(expectedConfigBytes),
Data: map[string][]byte{
tsoperator.TailscaledConfigFileName(106): expectedConfigBytes,
},
}, omitSecretData)
})
}
func proxyClassesForLEStagingTest() (*tsapi.ProxyClass, *tsapi.ProxyClass, *tsapi.ProxyClass) {
pcLEStaging := &tsapi.ProxyClass{
ObjectMeta: metav1.ObjectMeta{
Name: "le-staging",
Generation: 1,
},
Spec: tsapi.ProxyClassSpec{
UseLetsEncryptStagingEnvironment: true,
},
}
pcLEStagingFalse := &tsapi.ProxyClass{
ObjectMeta: metav1.ObjectMeta{
Name: "le-staging-false",
Generation: 1,
},
Spec: tsapi.ProxyClassSpec{
UseLetsEncryptStagingEnvironment: false,
},
}
pcOther := &tsapi.ProxyClass{
ObjectMeta: metav1.ObjectMeta{
Name: "other",
Generation: 1,
},
Spec: tsapi.ProxyClassSpec{},
}
return pcLEStaging, pcLEStagingFalse, pcOther
}
func setProxyClassReady(t *testing.T, fc client.Client, cl *tstest.Clock, name string) *tsapi.ProxyClass {
t.Helper()
pc := &tsapi.ProxyClass{}
if err := fc.Get(context.Background(), client.ObjectKey{Name: name}, pc); err != nil {
t.Fatal(err)
}
pc.Status = tsapi.ProxyClassStatus{
Conditions: []metav1.Condition{{
Type: string(tsapi.ProxyClassReady),
Status: metav1.ConditionTrue,
Reason: reasonProxyClassValid,
Message: reasonProxyClassValid,
LastTransitionTime: metav1.Time{Time: cl.Now().Truncate(time.Second)},
ObservedGeneration: pc.Generation,
}},
}
if err := fc.Status().Update(context.Background(), pc); err != nil {
t.Fatal(err)
}
return pc
}
func verifyProxyGroupCounts(t *testing.T, r *ProxyGroupReconciler, wantIngress, wantEgress int) {
@@ -543,6 +595,16 @@ func verifyEnvVar(t *testing.T, sts *appsv1.StatefulSet, name, expectedValue str
t.Errorf("%s environment variable not found", name)
}
func verifyEnvVarNotPresent(t *testing.T, sts *appsv1.StatefulSet, name string) {
t.Helper()
for _, env := range sts.Spec.Template.Spec.Containers[0].Env {
if env.Name == name {
t.Errorf("environment variable %s should not be present", name)
return
}
}
}
func expectProxyGroupResources(t *testing.T, fc client.WithWatch, pg *tsapi.ProxyGroup, shouldExist bool, cfgHash string, proxyClass *tsapi.ProxyClass) {
t.Helper()
@@ -621,10 +683,145 @@ func addNodeIDToStateSecrets(t *testing.T, fc client.WithWatch, pg *tsapi.ProxyG
}
}
// The operator mostly writes to StringData and reads from Data, but the fake
// client doesn't copy StringData across to Data on write. When comparing actual
// vs expected Secrets, use this function to only check what the operator writes
// to StringData.
func omitSecretData(secret *corev1.Secret) {
secret.Data = nil
func TestProxyGroupLetsEncryptStaging(t *testing.T) {
cl := tstest.NewClock(tstest.ClockOpts{})
zl := zap.Must(zap.NewDevelopment())
// Set up test cases; most are shared with non-HA Ingress.
type proxyGroupLETestCase struct {
leStagingTestCase
pgType tsapi.ProxyGroupType
}
pcLEStaging, pcLEStagingFalse, pcOther := proxyClassesForLEStagingTest()
sharedTestCases := testCasesForLEStagingTests(pcLEStaging, pcLEStagingFalse, pcOther)
var tests []proxyGroupLETestCase
for _, tt := range sharedTestCases {
tests = append(tests, proxyGroupLETestCase{
leStagingTestCase: tt,
pgType: tsapi.ProxyGroupTypeIngress,
})
}
tests = append(tests, proxyGroupLETestCase{
leStagingTestCase: leStagingTestCase{
name: "egress_pg_with_staging_proxyclass",
proxyClassPerResource: "le-staging",
useLEStagingEndpoint: false,
},
pgType: tsapi.ProxyGroupTypeEgress,
})
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
builder := fake.NewClientBuilder().
WithScheme(tsapi.GlobalScheme)
// Pre-populate the fake client with ProxyClasses.
builder = builder.WithObjects(pcLEStaging, pcLEStagingFalse, pcOther).
WithStatusSubresource(pcLEStaging, pcLEStagingFalse, pcOther)
fc := builder.Build()
// If the test case needs a ProxyClass to exist, ensure it is set to Ready.
if tt.proxyClassPerResource != "" || tt.defaultProxyClass != "" {
name := tt.proxyClassPerResource
if name == "" {
name = tt.defaultProxyClass
}
setProxyClassReady(t, fc, cl, name)
}
// Create ProxyGroup
pg := &tsapi.ProxyGroup{
ObjectMeta: metav1.ObjectMeta{
Name: "test",
},
Spec: tsapi.ProxyGroupSpec{
Type: tt.pgType,
Replicas: ptr.To[int32](1),
ProxyClass: tt.proxyClassPerResource,
},
}
mustCreate(t, fc, pg)
reconciler := &ProxyGroupReconciler{
tsNamespace: tsNamespace,
proxyImage: testProxyImage,
defaultTags: []string{"tag:test"},
defaultProxyClass: tt.defaultProxyClass,
Client: fc,
tsClient: &fakeTSClient{},
l: zl.Sugar(),
clock: cl,
}
expectReconciled(t, reconciler, "", pg.Name)
// Verify that the StatefulSet created for the ProxyGroup has
// the expected setting for the staging endpoint.
sts := &appsv1.StatefulSet{}
if err := fc.Get(context.Background(), client.ObjectKey{Namespace: tsNamespace, Name: pg.Name}, sts); err != nil {
t.Fatalf("failed to get StatefulSet: %v", err)
}
if tt.useLEStagingEndpoint {
verifyEnvVar(t, sts, "TS_DEBUG_ACME_DIRECTORY_URL", letsEncryptStagingEndpoint)
} else {
verifyEnvVarNotPresent(t, sts, "TS_DEBUG_ACME_DIRECTORY_URL")
}
})
}
}
type leStagingTestCase struct {
name string
// ProxyClass set on ProxyGroup or Ingress resource.
proxyClassPerResource string
// Default ProxyClass.
defaultProxyClass string
useLEStagingEndpoint bool
}
// Shared test cases for LE staging endpoint configuration for ProxyGroup and
// non-HA Ingress.
func testCasesForLEStagingTests(pcLEStaging, pcLEStagingFalse, pcOther *tsapi.ProxyClass) []leStagingTestCase {
return []leStagingTestCase{
{
name: "with_staging_proxyclass",
proxyClassPerResource: "le-staging",
useLEStagingEndpoint: true,
},
{
name: "with_staging_proxyclass_false",
proxyClassPerResource: "le-staging-false",
useLEStagingEndpoint: false,
},
{
name: "with_other_proxyclass",
proxyClassPerResource: "other",
useLEStagingEndpoint: false,
},
{
name: "no_proxyclass",
proxyClassPerResource: "",
useLEStagingEndpoint: false,
},
{
name: "with_default_staging_proxyclass",
proxyClassPerResource: "",
defaultProxyClass: "le-staging",
useLEStagingEndpoint: true,
},
{
name: "with_default_other_proxyclass",
proxyClassPerResource: "",
defaultProxyClass: "other",
useLEStagingEndpoint: false,
},
{
name: "with_default_staging_proxyclass_false",
proxyClassPerResource: "",
defaultProxyClass: "le-staging-false",
useLEStagingEndpoint: false,
},
}
}

View File

@@ -44,11 +44,9 @@ const (
// Labels that the operator sets on StatefulSets and Pods. If you add a
// new label here, do also add it to tailscaleManagedLabels var to
// ensure that it does not get overwritten by ProxyClass configuration.
LabelManaged = "tailscale.com/managed"
LabelParentType = "tailscale.com/parent-resource-type"
LabelParentName = "tailscale.com/parent-resource"
LabelParentNamespace = "tailscale.com/parent-resource-ns"
labelSecretType = "tailscale.com/secret-type" // "config" or "state".
// LabelProxyClass can be set by users on tailscale Ingresses and Services that define cluster ingress or
// cluster egress, to specify that configuration in this ProxyClass should be applied to resources created for
@@ -104,11 +102,13 @@ const (
envVarTSLocalAddrPort = "TS_LOCAL_ADDR_PORT"
defaultLocalAddrPort = 9002 // metrics and health check port
letsEncryptStagingEndpoint = "https://acme-staging-v02.api.letsencrypt.org/directory"
)
var (
// tailscaleManagedLabels are label keys that tailscale operator sets on StatefulSets and Pods.
tailscaleManagedLabels = []string{LabelManaged, LabelParentType, LabelParentName, LabelParentNamespace, "app"}
tailscaleManagedLabels = []string{kubetypes.LabelManaged, LabelParentType, LabelParentName, LabelParentNamespace, "app"}
// tailscaleManagedAnnotations are annotation keys that tailscale operator sets on StatefulSets and Pods.
tailscaleManagedAnnotations = []string{podAnnotationLastSetClusterIP, podAnnotationLastSetTailnetTargetIP, podAnnotationLastSetTailnetTargetFQDN, podAnnotationLastSetConfigFileHash}
)
@@ -785,6 +785,17 @@ func applyProxyClassToStatefulSet(pc *tsapi.ProxyClass, ss *appsv1.StatefulSet,
enableEndpoints(ss, metricsEnabled, debugEnabled)
}
}
if pc.Spec.UseLetsEncryptStagingEnvironment && (stsCfg.proxyType == proxyTypeIngressResource || stsCfg.proxyType == string(tsapi.ProxyGroupTypeIngress)) {
for i, c := range ss.Spec.Template.Spec.Containers {
if c.Name == "tailscale" {
ss.Spec.Template.Spec.Containers[i].Env = append(ss.Spec.Template.Spec.Containers[i].Env, corev1.EnvVar{
Name: "TS_DEBUG_ACME_DIRECTORY_URL",
Value: letsEncryptStagingEndpoint,
})
break
}
}
}
if pc.Spec.StatefulSet == nil {
return ss

View File

@@ -21,6 +21,7 @@ import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"sigs.k8s.io/yaml"
tsapi "tailscale.com/k8s-operator/apis/v1alpha1"
"tailscale.com/kube/kubetypes"
"tailscale.com/types/ptr"
)
@@ -156,8 +157,8 @@ func Test_applyProxyClassToStatefulSet(t *testing.T) {
// Set a couple additional fields so we can test that we don't
// mistakenly override those.
labels := map[string]string{
LabelManaged: "true",
LabelParentName: "foo",
kubetypes.LabelManaged: "true",
LabelParentName: "foo",
}
annots := map[string]string{
podAnnotationLastSetClusterIP: "1.2.3.4",
@@ -303,28 +304,28 @@ func Test_mergeStatefulSetLabelsOrAnnots(t *testing.T) {
}{
{
name: "no custom labels specified and none present in current labels, return current labels",
current: map[string]string{LabelManaged: "true", LabelParentName: "foo", LabelParentType: "svc", LabelParentNamespace: "foo"},
want: map[string]string{LabelManaged: "true", LabelParentName: "foo", LabelParentType: "svc", LabelParentNamespace: "foo"},
current: map[string]string{kubetypes.LabelManaged: "true", LabelParentName: "foo", LabelParentType: "svc", LabelParentNamespace: "foo"},
want: map[string]string{kubetypes.LabelManaged: "true", LabelParentName: "foo", LabelParentType: "svc", LabelParentNamespace: "foo"},
managed: tailscaleManagedLabels,
},
{
name: "no custom labels specified, but some present in current labels, return tailscale managed labels only from the current labels",
current: map[string]string{"foo": "bar", "something.io/foo": "bar", LabelManaged: "true", LabelParentName: "foo", LabelParentType: "svc", LabelParentNamespace: "foo"},
want: map[string]string{LabelManaged: "true", LabelParentName: "foo", LabelParentType: "svc", LabelParentNamespace: "foo"},
current: map[string]string{"foo": "bar", "something.io/foo": "bar", kubetypes.LabelManaged: "true", LabelParentName: "foo", LabelParentType: "svc", LabelParentNamespace: "foo"},
want: map[string]string{kubetypes.LabelManaged: "true", LabelParentName: "foo", LabelParentType: "svc", LabelParentNamespace: "foo"},
managed: tailscaleManagedLabels,
},
{
name: "custom labels specified, current labels only contain tailscale managed labels, return a union of both",
current: map[string]string{LabelManaged: "true", LabelParentName: "foo", LabelParentType: "svc", LabelParentNamespace: "foo"},
current: map[string]string{kubetypes.LabelManaged: "true", LabelParentName: "foo", LabelParentType: "svc", LabelParentNamespace: "foo"},
custom: map[string]string{"foo": "bar", "something.io/foo": "bar"},
want: map[string]string{"foo": "bar", "something.io/foo": "bar", LabelManaged: "true", LabelParentName: "foo", LabelParentType: "svc", LabelParentNamespace: "foo"},
want: map[string]string{"foo": "bar", "something.io/foo": "bar", kubetypes.LabelManaged: "true", LabelParentName: "foo", LabelParentType: "svc", LabelParentNamespace: "foo"},
managed: tailscaleManagedLabels,
},
{
name: "custom labels specified, current labels contain tailscale managed labels and custom labels, some of which re not present in the new custom labels, return a union of managed labels and the desired custom labels",
current: map[string]string{"foo": "bar", "bar": "baz", "app": "1234", LabelManaged: "true", LabelParentName: "foo", LabelParentType: "svc", LabelParentNamespace: "foo"},
current: map[string]string{"foo": "bar", "bar": "baz", "app": "1234", kubetypes.LabelManaged: "true", LabelParentName: "foo", LabelParentType: "svc", LabelParentNamespace: "foo"},
custom: map[string]string{"foo": "bar", "something.io/foo": "bar"},
want: map[string]string{"foo": "bar", "something.io/foo": "bar", "app": "1234", LabelManaged: "true", LabelParentName: "foo", LabelParentType: "svc", LabelParentNamespace: "foo"},
want: map[string]string{"foo": "bar", "something.io/foo": "bar", "app": "1234", kubetypes.LabelManaged: "true", LabelParentName: "foo", LabelParentType: "svc", LabelParentNamespace: "foo"},
managed: tailscaleManagedLabels,
},
{

View File

@@ -84,10 +84,10 @@ func childResourceLabels(name, ns, typ string) map[string]string {
// proxying. Instead, we have to do our own filtering and tracking with
// labels.
return map[string]string{
LabelManaged: "true",
LabelParentName: name,
LabelParentNamespace: ns,
LabelParentType: typ,
kubetypes.LabelManaged: "true",
LabelParentName: name,
LabelParentNamespace: ns,
LabelParentType: typ,
}
}

View File

@@ -32,6 +32,7 @@ import (
"tailscale.com/ipn"
"tailscale.com/ipn/ipnstate"
tsapi "tailscale.com/k8s-operator/apis/v1alpha1"
"tailscale.com/kube/kubetypes"
"tailscale.com/tailcfg"
"tailscale.com/types/ptr"
"tailscale.com/util/mak"
@@ -563,10 +564,10 @@ func expectedSecret(t *testing.T, cl client.Client, opts configOpts) *corev1.Sec
func findGenName(t *testing.T, client client.Client, ns, name, typ string) (full, noSuffix string) {
t.Helper()
labels := map[string]string{
LabelManaged: "true",
LabelParentName: name,
LabelParentNamespace: ns,
LabelParentType: typ,
kubetypes.LabelManaged: "true",
LabelParentName: name,
LabelParentNamespace: ns,
LabelParentType: typ,
}
s, err := getSingleObject[corev1.Secret](context.Background(), client, "operator-ns", labels)
if err != nil {

View File

@@ -230,7 +230,7 @@ func (r *RecorderReconciler) maybeProvision(ctx context.Context, tsr *tsapi.Reco
func (r *RecorderReconciler) maybeCleanup(ctx context.Context, tsr *tsapi.Recorder) (bool, error) {
logger := r.logger(tsr.Name)
id, _, ok, err := r.getNodeMetadata(ctx, tsr.Name)
prefs, ok, err := r.getDevicePrefs(ctx, tsr.Name)
if err != nil {
return false, err
}
@@ -243,6 +243,7 @@ func (r *RecorderReconciler) maybeCleanup(ctx context.Context, tsr *tsapi.Record
return true, nil
}
id := string(prefs.Config.NodeID)
logger.Debugf("deleting device %s from control", string(id))
if err := r.tsClient.DeleteDevice(ctx, string(id)); err != nil {
errResp := &tailscale.ErrResponse{}
@@ -327,34 +328,33 @@ func (r *RecorderReconciler) getStateSecret(ctx context.Context, tsrName string)
return secret, nil
}
func (r *RecorderReconciler) getNodeMetadata(ctx context.Context, tsrName string) (id tailcfg.StableNodeID, dnsName string, ok bool, err error) {
func (r *RecorderReconciler) getDevicePrefs(ctx context.Context, tsrName string) (prefs prefs, ok bool, err error) {
secret, err := r.getStateSecret(ctx, tsrName)
if err != nil || secret == nil {
return "", "", false, err
return prefs, false, err
}
return getNodeMetadata(ctx, secret)
return getDevicePrefs(secret)
}
// getNodeMetadata returns 'ok == true' iff the node ID is found. The dnsName
// getDevicePrefs returns 'ok == true' iff the node ID is found. The dnsName
// is expected to always be non-empty if the node ID is, but not required.
func getNodeMetadata(ctx context.Context, secret *corev1.Secret) (id tailcfg.StableNodeID, dnsName string, ok bool, err error) {
func getDevicePrefs(secret *corev1.Secret) (prefs prefs, ok bool, err error) {
// TODO(tomhjp): Should maybe use ipn to parse the following info instead.
currentProfile, ok := secret.Data[currentProfileKey]
if !ok {
return "", "", false, nil
return prefs, false, nil
}
profileBytes, ok := secret.Data[string(currentProfile)]
if !ok {
return "", "", false, nil
return prefs, false, nil
}
var profile profile
if err := json.Unmarshal(profileBytes, &profile); err != nil {
return "", "", false, fmt.Errorf("failed to extract node profile info from state Secret %s: %w", secret.Name, err)
if err := json.Unmarshal(profileBytes, &prefs); err != nil {
return prefs, false, fmt.Errorf("failed to extract node profile info from state Secret %s: %w", secret.Name, err)
}
ok = profile.Config.NodeID != ""
return tailcfg.StableNodeID(profile.Config.NodeID), profile.Config.UserProfile.LoginName, ok, nil
ok = prefs.Config.NodeID != ""
return prefs, ok, nil
}
func (r *RecorderReconciler) getDeviceInfo(ctx context.Context, tsrName string) (d tsapi.RecorderTailnetDevice, ok bool, err error) {
@@ -367,14 +367,14 @@ func (r *RecorderReconciler) getDeviceInfo(ctx context.Context, tsrName string)
}
func getDeviceInfo(ctx context.Context, tsClient tsClient, secret *corev1.Secret) (d tsapi.RecorderTailnetDevice, ok bool, err error) {
nodeID, dnsName, ok, err := getNodeMetadata(ctx, secret)
prefs, ok, err := getDevicePrefs(secret)
if !ok || err != nil {
return tsapi.RecorderTailnetDevice{}, false, err
}
// TODO(tomhjp): The profile info doesn't include addresses, which is why we
// need the API. Should we instead update the profile to include addresses?
device, err := tsClient.Device(ctx, string(nodeID), nil)
device, err := tsClient.Device(ctx, string(prefs.Config.NodeID), nil)
if err != nil {
return tsapi.RecorderTailnetDevice{}, false, fmt.Errorf("failed to get device info from API: %w", err)
}
@@ -383,20 +383,25 @@ func getDeviceInfo(ctx context.Context, tsClient tsClient, secret *corev1.Secret
Hostname: device.Hostname,
TailnetIPs: device.Addresses,
}
if dnsName != "" {
if dnsName := prefs.Config.UserProfile.LoginName; dnsName != "" {
d.URL = fmt.Sprintf("https://%s", dnsName)
}
return d, true, nil
}
type profile struct {
// [prefs] is a subset of the ipn.Prefs struct used for extracting information
// from the state Secret of Tailscale devices.
type prefs struct {
Config struct {
NodeID string `json:"NodeID"`
NodeID tailcfg.StableNodeID `json:"NodeID"`
UserProfile struct {
// LoginName is the MagicDNS name of the device, e.g. foo.tail-scale.ts.net.
LoginName string `json:"LoginName"`
} `json:"UserProfile"`
} `json:"Config"`
AdvertiseServices []string `json:"AdvertiseServices"`
}
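As a rough illustration (not from the change itself), this is the shape of the profile blob that getDevicePrefs extracts from a device state Secret. The field values below are hypothetical; localPrefs simply mirrors the prefs subset defined above so the snippet is self-contained:

package example

import (
	"encoding/json"
	"fmt"

	"tailscale.com/tailcfg"
)

// localPrefs mirrors the prefs subset defined in the diff above.
type localPrefs struct {
	Config struct {
		NodeID      tailcfg.StableNodeID `json:"NodeID"`
		UserProfile struct {
			LoginName string `json:"LoginName"`
		} `json:"UserProfile"`
	} `json:"Config"`
	AdvertiseServices []string `json:"AdvertiseServices"`
}

func main() {
	// Hypothetical profile JSON of the kind stored in a device state Secret.
	blob := []byte(`{"Config":{"NodeID":"nODE123CNTRL","UserProfile":{"LoginName":"pg-0.tailnet.ts.net"}},"AdvertiseServices":["svc:web"]}`)
	var p localPrefs
	if err := json.Unmarshal(blob, &p); err != nil {
		panic(err)
	}
	fmt.Println(p.Config.NodeID, p.Config.UserProfile.LoginName)
}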
func markedForDeletion(obj metav1.Object) bool {

View File

@@ -94,18 +94,24 @@ func main() {
}
ignoreDstTable.Insert(pfx, true)
}
var v4Prefixes []netip.Prefix
var (
v4Prefixes []netip.Prefix
numV4DNSAddrs int
)
for _, s := range strings.Split(*v4PfxStr, ",") {
p := netip.MustParsePrefix(strings.TrimSpace(s))
if p.Masked() != p {
log.Fatalf("v4 prefix %v is not a masked prefix", p)
}
v4Prefixes = append(v4Prefixes, p)
numIPs := 1 << (32 - p.Bits())
numV4DNSAddrs += numIPs
}
if len(v4Prefixes) == 0 {
log.Fatalf("no v4 prefixes specified")
}
dnsAddr := v4Prefixes[0].Addr()
numV4DNSAddrs -= 1 // Subtract the dnsAddr allocated above.
ts := &tsnet.Server{
Hostname: *hostname,
}
@@ -153,12 +159,13 @@ func main() {
}
c := &connector{
ts: ts,
lc: lc,
dnsAddr: dnsAddr,
v4Ranges: v4Prefixes,
v6ULA: ula(uint16(*siteID)),
ignoreDsts: ignoreDstTable,
ts: ts,
lc: lc,
dnsAddr: dnsAddr,
v4Ranges: v4Prefixes,
numV4DNSAddrs: numV4DNSAddrs,
v6ULA: ula(uint16(*siteID)),
ignoreDsts: ignoreDstTable,
}
c.run(ctx)
}
@@ -177,6 +184,11 @@ type connector struct {
// v4Ranges is the list of IPv4 ranges to advertise and assign addresses from.
// These are masked prefixes.
v4Ranges []netip.Prefix
// numV4DNSAddrs is the total size of the IPv4 ranges in addresses, minus the
// dnsAddr allocation.
numV4DNSAddrs int
// v6ULA is the ULA prefix used by the app connector to assign IPv6 addresses.
v6ULA netip.Prefix
@@ -502,6 +514,7 @@ type perPeerState struct {
mu sync.Mutex
domainToAddr map[string][]netip.Addr
addrToDomain *bart.Table[string]
numV4Allocs int
}
// domainForIP returns the domain name assigned to the given IP address and
@@ -547,17 +560,25 @@ func (ps *perPeerState) isIPUsedLocked(ip netip.Addr) bool {
// unusedIPv4Locked returns an unused IPv4 address from the available ranges.
func (ps *perPeerState) unusedIPv4Locked() netip.Addr {
// All addresses have been allocated.
if ps.numV4Allocs >= ps.c.numV4DNSAddrs {
return netip.Addr{}
}
// TODO: skip ranges that have been exhausted
for _, r := range ps.c.v4Ranges {
ip := randV4(r)
for r.Contains(ip) {
// TODO: implement a much more efficient algorithm for finding unused IPs,
// this is fairly crazy.
for {
for _, r := range ps.c.v4Ranges {
ip := randV4(r)
if !r.Contains(ip) {
panic("error: randV4 returned invalid address")
}
if !ps.isIPUsedLocked(ip) && ip != ps.c.dnsAddr {
return ip
}
ip = ip.Next()
}
}
return netip.Addr{}
}
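A quick sanity check on the accounting above (a sketch, not part of the change): each IPv4 /N range contributes 2^(32-N) addresses, and one address is reserved for dnsAddr, so the /30 used in the exhaustion test below leaves 4 - 1 = 3 allocatable addresses.

package example

import (
	"fmt"
	"net/netip"
)

func main() {
	ranges := []netip.Prefix{netip.MustParsePrefix("100.64.1.0/30")}
	total := 0
	for _, p := range ranges {
		total += 1 << (32 - p.Bits()) // 4 addresses for a /30
	}
	total-- // one address is reserved as the DNS server address
	fmt.Println(total) // 3
}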
// randV4 returns a random IPv4 address within the given prefix.
@@ -583,6 +604,7 @@ func (ps *perPeerState) assignAddrsLocked(domain string) []netip.Addr {
if !v4.IsValid() {
return nil
}
ps.numV4Allocs++
as16 := ps.c.v6ULA.Addr().As16()
as4 := v4.As4()
copy(as16[12:], as4[:])

cmd/natc/natc_test.go (new file, 429 lines)
View File

@@ -0,0 +1,429 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
package main
import (
"errors"
"fmt"
"net/netip"
"slices"
"testing"
"github.com/gaissmai/bart"
"github.com/google/go-cmp/cmp"
"golang.org/x/net/dns/dnsmessage"
"tailscale.com/tailcfg"
)
func prefixEqual(a, b netip.Prefix) bool {
return a.Bits() == b.Bits() && a.Addr() == b.Addr()
}
func TestULA(t *testing.T) {
tests := []struct {
name string
siteID uint16
expected string
}{
{"zero", 0, "fd7a:115c:a1e0:a99c:0000::/80"},
{"one", 1, "fd7a:115c:a1e0:a99c:0001::/80"},
{"max", 65535, "fd7a:115c:a1e0:a99c:ffff::/80"},
{"random", 12345, "fd7a:115c:a1e0:a99c:3039::/80"},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
got := ula(tc.siteID)
expected := netip.MustParsePrefix(tc.expected)
if !prefixEqual(got, expected) {
t.Errorf("ula(%d) = %s; want %s", tc.siteID, got, expected)
}
})
}
}
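The expected values in the table above follow from embedding the 16-bit site ID as the fifth hextet of the fixed fd7a:115c:a1e0:a99c::/64 base (12345 = 0x3039). A hedged sketch of that construction, assuming the embedding is all ula() does:

package example

import (
	"fmt"
	"net/netip"
)

// ulaForSite sketches how a site ID could map to the natc ULA prefix: the ID
// is written into bytes 8-9 of the fd7a:115c:a1e0:a99c::/64 base, giving one
// /80 per site.
func ulaForSite(siteID uint16) netip.Prefix {
	a := netip.MustParseAddr("fd7a:115c:a1e0:a99c::").As16()
	a[8] = byte(siteID >> 8)
	a[9] = byte(siteID)
	return netip.PrefixFrom(netip.AddrFrom16(a), 80)
}

func main() {
	fmt.Println(ulaForSite(12345)) // fd7a:115c:a1e0:a99c:3039::/80
}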
func TestRandV4(t *testing.T) {
pfx := netip.MustParsePrefix("100.64.1.0/24")
for i := 0; i < 512; i++ {
ip := randV4(pfx)
if !pfx.Contains(ip) {
t.Errorf("randV4(%s) = %s; not contained in prefix", pfx, ip)
}
}
}
func TestDNSResponse(t *testing.T) {
tests := []struct {
name string
questions []dnsmessage.Question
addrs []netip.Addr
wantEmpty bool
wantAnswers []struct {
name string
qType dnsmessage.Type
addr netip.Addr
}
}{
{
name: "empty_request",
questions: []dnsmessage.Question{},
addrs: []netip.Addr{},
wantEmpty: false,
wantAnswers: nil,
},
{
name: "a_record",
questions: []dnsmessage.Question{
{
Name: dnsmessage.MustNewName("example.com."),
Type: dnsmessage.TypeA,
Class: dnsmessage.ClassINET,
},
},
addrs: []netip.Addr{netip.MustParseAddr("100.64.1.5")},
wantAnswers: []struct {
name string
qType dnsmessage.Type
addr netip.Addr
}{
{
name: "example.com.",
qType: dnsmessage.TypeA,
addr: netip.MustParseAddr("100.64.1.5"),
},
},
},
{
name: "aaaa_record",
questions: []dnsmessage.Question{
{
Name: dnsmessage.MustNewName("example.com."),
Type: dnsmessage.TypeAAAA,
Class: dnsmessage.ClassINET,
},
},
addrs: []netip.Addr{netip.MustParseAddr("fd7a:115c:a1e0:a99c:0001:0505:0505:0505")},
wantAnswers: []struct {
name string
qType dnsmessage.Type
addr netip.Addr
}{
{
name: "example.com.",
qType: dnsmessage.TypeAAAA,
addr: netip.MustParseAddr("fd7a:115c:a1e0:a99c:0001:0505:0505:0505"),
},
},
},
{
name: "soa_record",
questions: []dnsmessage.Question{
{
Name: dnsmessage.MustNewName("example.com."),
Type: dnsmessage.TypeSOA,
Class: dnsmessage.ClassINET,
},
},
addrs: []netip.Addr{},
wantAnswers: nil,
},
{
name: "ns_record",
questions: []dnsmessage.Question{
{
Name: dnsmessage.MustNewName("example.com."),
Type: dnsmessage.TypeNS,
Class: dnsmessage.ClassINET,
},
},
addrs: []netip.Addr{},
wantAnswers: nil,
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
req := &dnsmessage.Message{
Header: dnsmessage.Header{
ID: 1234,
},
Questions: tc.questions,
}
resp, err := dnsResponse(req, tc.addrs)
if err != nil {
t.Fatalf("dnsResponse() error = %v", err)
}
if tc.wantEmpty && len(resp) != 0 {
t.Errorf("dnsResponse() returned non-empty response when expected empty")
}
if !tc.wantEmpty && len(resp) == 0 {
t.Errorf("dnsResponse() returned empty response when expected non-empty")
}
if len(resp) > 0 {
var msg dnsmessage.Message
err = msg.Unpack(resp)
if err != nil {
t.Fatalf("Failed to unpack response: %v", err)
}
if !msg.Header.Response {
t.Errorf("Response header is not set")
}
if msg.Header.ID != req.Header.ID {
t.Errorf("Response ID = %d, want %d", msg.Header.ID, req.Header.ID)
}
if len(tc.wantAnswers) > 0 {
if len(msg.Answers) != len(tc.wantAnswers) {
t.Errorf("got %d answers, want %d", len(msg.Answers), len(tc.wantAnswers))
} else {
for i, want := range tc.wantAnswers {
ans := msg.Answers[i]
gotName := ans.Header.Name.String()
if gotName != want.name {
t.Errorf("answer[%d] name = %s, want %s", i, gotName, want.name)
}
if ans.Header.Type != want.qType {
t.Errorf("answer[%d] type = %v, want %v", i, ans.Header.Type, want.qType)
}
var gotIP netip.Addr
switch want.qType {
case dnsmessage.TypeA:
resource, ok := ans.Body.(*dnsmessage.AResource)
if !ok {
t.Errorf("answer[%d] not an A record", i)
continue
}
gotIP = netip.AddrFrom4([4]byte(resource.A))
case dnsmessage.TypeAAAA:
resource, ok := ans.Body.(*dnsmessage.AAAAResource)
if !ok {
t.Errorf("answer[%d] not an AAAA record", i)
continue
}
gotIP = netip.AddrFrom16([16]byte(resource.AAAA))
}
if gotIP != want.addr {
t.Errorf("answer[%d] IP = %s, want %s", i, gotIP, want.addr)
}
}
}
}
}
})
}
}
func TestPerPeerState(t *testing.T) {
c := &connector{
v4Ranges: []netip.Prefix{netip.MustParsePrefix("100.64.1.0/24")},
v6ULA: netip.MustParsePrefix("fd7a:115c:a1e0:a99c:0001::/80"),
dnsAddr: netip.MustParseAddr("100.64.1.0"),
numV4DNSAddrs: (1<<(32-24) - 1),
}
ps := &perPeerState{c: c}
addrs, err := ps.ipForDomain("example.com")
if err != nil {
t.Fatalf("ipForDomain() error = %v", err)
}
if len(addrs) != 2 {
t.Fatalf("ipForDomain() returned %d addresses, want 2", len(addrs))
}
v4 := addrs[0]
v6 := addrs[1]
if !v4.Is4() {
t.Errorf("First address is not IPv4: %s", v4)
}
if !v6.Is6() {
t.Errorf("Second address is not IPv6: %s", v6)
}
if !c.v4Ranges[0].Contains(v4) {
t.Errorf("IPv4 address %s not in range %s", v4, c.v4Ranges[0])
}
domain, ok := ps.domainForIP(v4)
if !ok {
t.Errorf("domainForIP(%s) not found", v4)
} else if domain != "example.com" {
t.Errorf("domainForIP(%s) = %s, want %s", v4, domain, "example.com")
}
domain, ok = ps.domainForIP(v6)
if !ok {
t.Errorf("domainForIP(%s) not found", v6)
} else if domain != "example.com" {
t.Errorf("domainForIP(%s) = %s, want %s", v6, domain, "example.com")
}
addrs2, err := ps.ipForDomain("example.com")
if err != nil {
t.Fatalf("ipForDomain() second call error = %v", err)
}
if !slices.Equal(addrs, addrs2) {
t.Errorf("ipForDomain() second call = %v, want %v", addrs2, addrs)
}
}
func TestIgnoreDestination(t *testing.T) {
ignoreDstTable := &bart.Table[bool]{}
ignoreDstTable.Insert(netip.MustParsePrefix("192.168.1.0/24"), true)
ignoreDstTable.Insert(netip.MustParsePrefix("10.0.0.0/8"), true)
c := &connector{
ignoreDsts: ignoreDstTable,
}
tests := []struct {
name string
addrs []netip.Addr
expected bool
}{
{
name: "no_match",
addrs: []netip.Addr{netip.MustParseAddr("8.8.8.8"), netip.MustParseAddr("1.1.1.1")},
expected: false,
},
{
name: "one_match",
addrs: []netip.Addr{netip.MustParseAddr("8.8.8.8"), netip.MustParseAddr("192.168.1.5")},
expected: true,
},
{
name: "all_match",
addrs: []netip.Addr{netip.MustParseAddr("10.0.0.1"), netip.MustParseAddr("192.168.1.5")},
expected: true,
},
{
name: "empty_addrs",
addrs: []netip.Addr{},
expected: false,
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
got := c.ignoreDestination(tc.addrs)
if got != tc.expected {
t.Errorf("ignoreDestination(%v) = %v, want %v", tc.addrs, got, tc.expected)
}
})
}
}
func TestConnectorGenerateDNSResponse(t *testing.T) {
c := &connector{
v4Ranges: []netip.Prefix{netip.MustParsePrefix("100.64.1.0/24")},
v6ULA: netip.MustParsePrefix("fd7a:115c:a1e0:a99c:0001::/80"),
dnsAddr: netip.MustParseAddr("100.64.1.0"),
numV4DNSAddrs: (1<<(32-24) - 1),
}
req := &dnsmessage.Message{
Header: dnsmessage.Header{ID: 1234},
Questions: []dnsmessage.Question{
{
Name: dnsmessage.MustNewName("example.com."),
Type: dnsmessage.TypeA,
Class: dnsmessage.ClassINET,
},
},
}
nodeID := tailcfg.NodeID(12345)
resp1, err := c.generateDNSResponse(req, nodeID)
if err != nil {
t.Fatalf("generateDNSResponse() error = %v", err)
}
if len(resp1) == 0 {
t.Fatalf("generateDNSResponse() returned empty response")
}
resp2, err := c.generateDNSResponse(req, nodeID)
if err != nil {
t.Fatalf("generateDNSResponse() second call error = %v", err)
}
if !cmp.Equal(resp1, resp2) {
t.Errorf("generateDNSResponse() responses differ between calls")
}
}
func TestIPPoolExhaustion(t *testing.T) {
smallPrefix := netip.MustParsePrefix("100.64.1.0/30") // Only 4 IPs: .0, .1, .2, .3
c := &connector{
v6ULA: netip.MustParsePrefix("fd7a:115c:a1e0:a99c:0001::/80"),
v4Ranges: []netip.Prefix{smallPrefix},
dnsAddr: netip.MustParseAddr("100.64.1.0"),
numV4DNSAddrs: 3,
}
ps := &perPeerState{c: c}
assignedIPs := make(map[netip.Addr]string)
domains := []string{"a.example.com", "b.example.com", "c.example.com", "d.example.com"}
var errs []error
for i := 0; i < 5; i++ {
for _, domain := range domains {
addrs, err := ps.ipForDomain(domain)
if err != nil {
errs = append(errs, fmt.Errorf("failed to get IP for domain %q: %w", domain, err))
continue
}
for _, addr := range addrs {
if d, ok := assignedIPs[addr]; ok {
if d != domain {
t.Errorf("IP %s reused for domain %q, previously assigned to %q", addr, domain, d)
}
} else {
assignedIPs[addr] = domain
}
}
}
}
for addr, domain := range assignedIPs {
if addr.Is4() && !smallPrefix.Contains(addr) {
t.Errorf("IP %s for domain %q not in expected range %s", addr, domain, smallPrefix)
}
if addr.Is6() && !c.v6ULA.Contains(addr) {
t.Errorf("IP %s for domain %q not in expected range %s", addr, domain, c.v6ULA)
}
if addr == c.dnsAddr {
t.Errorf("IP %s for domain %q is the reserved DNS address", addr, domain)
}
}
// expect one error for each iteration with the 4th domain
if len(errs) != 5 {
t.Errorf("Expected 5 errors, got %d: %v", len(errs), errs)
}
for _, err := range errs {
if !errors.Is(err, ErrNoIPsAvailable) {
t.Errorf("generateDNSResponse() error = %v, want ErrNoIPsAvailable", err)
}
}
}

View File

@@ -19,8 +19,25 @@
// header_property = username
// auto_sign_up = true
// whitelist = 127.0.0.1
// headers = Name:X-WEBAUTH-NAME
// headers = Email:X-Webauth-User, Name:X-Webauth-Name, Role:X-Webauth-Role
// enable_login_token = true
//
// You can use grants in Tailscale ACL to give users different roles in Grafana.
// For example, to give group:eng the Editor role, add the following to your ACLs:
//
// "grants": [
// {
// "src": ["group:eng"],
// "dst": ["tag:grafana"],
// "app": {
// "tailscale.com/cap/proxy-to-grafana": [{
// "role": "editor",
// }],
// },
// },
// ],
//
// If multiple roles are specified, the most permissive role is used.
package main
import (
@@ -49,6 +66,57 @@ var (
loginServer = flag.String("login-server", "", "URL to alternative control server. If empty, the default Tailscale control is used.")
)
// aclCap is the Tailscale ACL capability used to configure proxy-to-grafana.
const aclCap tailcfg.PeerCapability = "tailscale.com/cap/proxy-to-grafana"
// aclGrant is an access control rule that assigns Grafana permissions
// while provisioning a user.
type aclGrant struct {
// Role is one of: "viewer", "editor", "admin".
Role string `json:"role"`
}
// grafanaRole defines possible Grafana roles.
type grafanaRole int
const (
// Roles are ordered by their permissions, with the least permissive role first.
// If a user has multiple roles, the most permissive role is used.
ViewerRole grafanaRole = iota
EditorRole
AdminRole
)
// String returns the string representation of a grafanaRole.
// It is used as a header value in the HTTP request to Grafana.
func (r grafanaRole) String() string {
switch r {
case ViewerRole:
return "Viewer"
case EditorRole:
return "Editor"
case AdminRole:
return "Admin"
default:
// A safe default.
return "Viewer"
}
}
// roleFromString converts a string to a grafanaRole.
// It is used to parse the role from the ACL grant.
func roleFromString(s string) (grafanaRole, error) {
switch strings.ToLower(s) {
case "viewer":
return ViewerRole, nil
case "editor":
return EditorRole, nil
case "admin":
return AdminRole, nil
}
return ViewerRole, fmt.Errorf("unknown role: %q", s)
}
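A short illustration of the precedence rule described above: when a user matches several grants, the numerically largest (most permissive) role wins. The snippet mirrors the role ordering defined in this file but is a standalone sketch, not the actual proxy code:

package example

import "fmt"

// Mirrors the grafanaRole ordering above: Viewer < Editor < Admin.
type grafanaRole int

const (
	ViewerRole grafanaRole = iota
	EditorRole
	AdminRole
)

func main() {
	// Hypothetical roles granted to one user by multiple matching ACL rules.
	granted := []grafanaRole{ViewerRole, AdminRole, EditorRole}
	role := ViewerRole // default when no grant matches
	for _, r := range granted {
		role = max(role, r)
	}
	fmt.Println(role == AdminRole) // true: the most permissive role wins
}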
func main() {
flag.Parse()
if *hostname == "" || strings.Contains(*hostname, ".") {
@@ -134,7 +202,15 @@ func modifyRequest(req *http.Request, localClient *local.Client) {
return
}
user, err := getTailscaleUser(req.Context(), localClient, req.RemoteAddr)
// Delete any existing X-Webauth-* headers to prevent possible spoofing
// if getting Tailnet identity fails.
for h := range req.Header {
if strings.HasPrefix(h, "X-Webauth-") {
req.Header.Del(h)
}
}
user, role, err := getTailscaleIdentity(req.Context(), localClient, req.RemoteAddr)
if err != nil {
log.Printf("error getting Tailscale user: %v", err)
return
@@ -142,19 +218,33 @@ func modifyRequest(req *http.Request, localClient *local.Client) {
req.Header.Set("X-Webauth-User", user.LoginName)
req.Header.Set("X-Webauth-Name", user.DisplayName)
req.Header.Set("X-Webauth-Role", role.String())
}
func getTailscaleUser(ctx context.Context, localClient *local.Client, ipPort string) (*tailcfg.UserProfile, error) {
func getTailscaleIdentity(ctx context.Context, localClient *local.Client, ipPort string) (*tailcfg.UserProfile, grafanaRole, error) {
whois, err := localClient.WhoIs(ctx, ipPort)
if err != nil {
return nil, fmt.Errorf("failed to identify remote host: %w", err)
return nil, ViewerRole, fmt.Errorf("failed to identify remote host: %w", err)
}
if whois.Node.IsTagged() {
return nil, fmt.Errorf("tagged nodes are not users")
return nil, ViewerRole, fmt.Errorf("tagged nodes are not users")
}
if whois.UserProfile == nil || whois.UserProfile.LoginName == "" {
return nil, fmt.Errorf("failed to identify remote user")
return nil, ViewerRole, fmt.Errorf("failed to identify remote user")
}
return whois.UserProfile, nil
role := ViewerRole
grants, err := tailcfg.UnmarshalCapJSON[aclGrant](whois.CapMap, aclCap)
if err != nil {
return nil, ViewerRole, fmt.Errorf("failed to unmarshal ACL grants: %w", err)
}
for _, g := range grants {
r, err := roleFromString(g.Role)
if err != nil {
return nil, ViewerRole, fmt.Errorf("failed to parse role: %w", err)
}
role = max(role, r)
}
return whois.UserProfile, role, nil
}

View File

@@ -49,6 +49,7 @@ tailscale.com/cmd/stund dependencies: (generated by github.com/tailscale/depawar
google.golang.org/protobuf/types/known/timestamppb from github.com/prometheus/client_golang/prometheus+
tailscale.com from tailscale.com/version
tailscale.com/envknob from tailscale.com/tsweb+
tailscale.com/feature from tailscale.com/tsweb
tailscale.com/kube/kubetypes from tailscale.com/envknob
tailscale.com/metrics from tailscale.com/net/stunserver+
tailscale.com/net/netaddr from tailscale.com/net/tsaddr
@@ -57,8 +58,8 @@ tailscale.com/cmd/stund dependencies: (generated by github.com/tailscale/depawar
tailscale.com/net/tsaddr from tailscale.com/tsweb
tailscale.com/syncs from tailscale.com/metrics
tailscale.com/tailcfg from tailscale.com/version
tailscale.com/tsweb from tailscale.com/cmd/stund
tailscale.com/tsweb/promvarz from tailscale.com/tsweb
tailscale.com/tsweb from tailscale.com/cmd/stund+
tailscale.com/tsweb/promvarz from tailscale.com/cmd/stund
tailscale.com/tsweb/varz from tailscale.com/tsweb+
tailscale.com/types/dnstype from tailscale.com/tailcfg
tailscale.com/types/ipproto from tailscale.com/tailcfg
@@ -194,7 +195,7 @@ tailscale.com/cmd/stund dependencies: (generated by github.com/tailscale/depawar
hash/maphash from go4.org/mem
html from net/http/pprof+
internal/abi from crypto/x509/internal/macos+
internal/asan from syscall+
internal/asan from internal/runtime/maps+
internal/bisect from internal/godebug
internal/bytealg from bytes+
internal/byteorder from crypto/cipher+
@@ -204,12 +205,12 @@ tailscale.com/cmd/stund dependencies: (generated by github.com/tailscale/depawar
internal/filepathlite from os+
internal/fmtsort from fmt
internal/goarch from crypto/internal/fips140deps/cpu+
internal/godebug from crypto/tls+
internal/godebug from crypto/internal/fips140deps/godebug+
internal/godebugs from internal/godebug+
internal/goexperiment from runtime+
internal/goexperiment from hash/maphash+
internal/goos from crypto/x509+
internal/itoa from internal/poll+
internal/msan from syscall+
internal/msan from internal/runtime/maps+
internal/nettrace from net+
internal/oserror from io/fs+
internal/poll from net+

View File

@@ -15,6 +15,9 @@ import (
"tailscale.com/net/stunserver"
"tailscale.com/tsweb"
// Support for prometheus varz in tsweb
_ "tailscale.com/tsweb/promvarz"
)
var (

View File

@@ -43,6 +43,7 @@ import (
"tailscale.com/tailcfg"
"tailscale.com/types/key"
"tailscale.com/types/logger"
"tailscale.com/util/eventbus"
"tailscale.com/util/must"
)
@@ -136,6 +137,17 @@ func debugCmd() *ffcli.Command {
Exec: runLocalCreds,
ShortHelp: "Print how to access Tailscale LocalAPI",
},
{
Name: "localapi",
ShortUsage: "tailscale debug localapi [<method>] <path> [<body| \"-\">]",
Exec: runLocalAPI,
ShortHelp: "Call a LocalAPI method directly",
FlagSet: (func() *flag.FlagSet {
fs := newFlagSet("localapi")
fs.BoolVar(&localAPIFlags.verbose, "v", false, "verbose; dump HTTP headers")
return fs
})(),
},
{
Name: "restun",
ShortUsage: "tailscale debug restun",
@@ -451,6 +463,81 @@ func runLocalCreds(ctx context.Context, args []string) error {
return nil
}
func looksLikeHTTPMethod(s string) bool {
if len(s) > len("OPTIONS") {
return false
}
for _, r := range s {
if r < 'A' || r > 'Z' {
return false
}
}
return true
}
var localAPIFlags struct {
verbose bool
}
func runLocalAPI(ctx context.Context, args []string) error {
if len(args) == 0 {
return errors.New("expected at least one argument")
}
method := "GET"
if looksLikeHTTPMethod(args[0]) {
method = args[0]
args = args[1:]
if len(args) == 0 {
return errors.New("expected at least one argument after method")
}
}
path := args[0]
if !strings.HasPrefix(path, "/localapi/") {
if !strings.Contains(path, "/") {
path = "/localapi/v0/" + path
} else {
path = "/localapi/" + path
}
}
var body io.Reader
if len(args) > 1 {
if args[1] == "-" {
fmt.Fprintf(Stderr, "# reading request body from stdin...\n")
all, err := io.ReadAll(os.Stdin)
if err != nil {
return fmt.Errorf("reading Stdin: %q", err)
}
body = bytes.NewReader(all)
} else {
body = strings.NewReader(args[1])
}
}
req, err := http.NewRequest(method, "http://local-tailscaled.sock"+path, body)
if err != nil {
return err
}
fmt.Fprintf(Stderr, "# doing request %s %s\n", method, path)
res, err := localClient.DoLocalRequest(req)
if err != nil {
return err
}
is2xx := res.StatusCode >= 200 && res.StatusCode <= 299
if localAPIFlags.verbose {
res.Write(Stdout)
} else {
if !is2xx {
fmt.Fprintf(Stderr, "# Response status %s\n", res.Status)
}
io.Copy(Stdout, res.Body)
}
if is2xx {
return nil
}
return errors.New(res.Status)
}
type localClientRoundTripper struct{}
func (localClientRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
@@ -870,7 +957,10 @@ func runTS2021(ctx context.Context, args []string) error {
logf = log.Printf
}
netMon, err := netmon.New(logger.WithPrefix(logf, "netmon: "))
bus := eventbus.New()
defer bus.Close()
netMon, err := netmon.New(bus, logger.WithPrefix(logf, "netmon: "))
if err != nil {
return fmt.Errorf("creating netmon: %w", err)
}

View File

@@ -24,6 +24,7 @@ import (
"tailscale.com/net/tlsdial"
"tailscale.com/tailcfg"
"tailscale.com/types/logger"
"tailscale.com/util/eventbus"
)
var netcheckCmd = &ffcli.Command{
@@ -48,14 +49,19 @@ var netcheckArgs struct {
func runNetcheck(ctx context.Context, args []string) error {
logf := logger.WithPrefix(log.Printf, "portmap: ")
netMon, err := netmon.New(logf)
bus := eventbus.New()
defer bus.Close()
netMon, err := netmon.New(bus, logf)
if err != nil {
return err
}
// Ensure that we close the portmapper after running a netcheck; this
// will release any port mappings created.
pm := portmapper.NewClient(logf, netMon, nil, nil, nil)
pm := portmapper.NewClient(portmapper.Config{
Logf: logf,
NetMon: netMon,
})
defer pm.Close()
c := &netcheck.Client{

View File

@@ -5,6 +5,10 @@ tailscale.com/cmd/tailscale dependencies: (generated by github.com/tailscale/dep
W 💣 github.com/alexbrainman/sspi from github.com/alexbrainman/sspi/internal/common+
W github.com/alexbrainman/sspi/internal/common from github.com/alexbrainman/sspi/negotiate
W 💣 github.com/alexbrainman/sspi/negotiate from tailscale.com/net/tshttpproxy
github.com/coder/websocket from tailscale.com/util/eventbus
github.com/coder/websocket/internal/errd from github.com/coder/websocket
github.com/coder/websocket/internal/util from github.com/coder/websocket
github.com/coder/websocket/internal/xsync from github.com/coder/websocket
L github.com/coreos/go-iptables/iptables from tailscale.com/util/linuxfw
W 💣 github.com/dblohm7/wingoes from github.com/dblohm7/wingoes/pe+
W 💣 github.com/dblohm7/wingoes/pe from tailscale.com/util/winutil/authenticode
@@ -89,6 +93,7 @@ tailscale.com/cmd/tailscale dependencies: (generated by github.com/tailscale/dep
tailscale.com/drive from tailscale.com/client/local+
tailscale.com/envknob from tailscale.com/client/local+
tailscale.com/envknob/featureknob from tailscale.com/client/web
tailscale.com/feature from tailscale.com/tsweb
tailscale.com/feature/capture/dissector from tailscale.com/cmd/tailscale/cli
tailscale.com/health from tailscale.com/net/tlsdial+
tailscale.com/health/healthmsg from tailscale.com/cmd/tailscale/cli
@@ -131,7 +136,8 @@ tailscale.com/cmd/tailscale dependencies: (generated by github.com/tailscale/dep
tailscale.com/tstime from tailscale.com/control/controlhttp+
tailscale.com/tstime/mono from tailscale.com/tstime/rate
tailscale.com/tstime/rate from tailscale.com/cmd/tailscale/cli+
tailscale.com/tsweb/varz from tailscale.com/util/usermetric
tailscale.com/tsweb from tailscale.com/util/eventbus
tailscale.com/tsweb/varz from tailscale.com/util/usermetric+
tailscale.com/types/dnstype from tailscale.com/tailcfg+
tailscale.com/types/empty from tailscale.com/ipn
tailscale.com/types/ipproto from tailscale.com/ipn+
@@ -156,6 +162,7 @@ tailscale.com/cmd/tailscale dependencies: (generated by github.com/tailscale/dep
💣 tailscale.com/util/deephash from tailscale.com/util/syspolicy/setting
L 💣 tailscale.com/util/dirwalk from tailscale.com/metrics
tailscale.com/util/dnsname from tailscale.com/cmd/tailscale/cli+
tailscale.com/util/eventbus from tailscale.com/net/portmapper+
tailscale.com/util/groupmember from tailscale.com/client/web
💣 tailscale.com/util/hashx from tailscale.com/util/deephash
tailscale.com/util/httpm from tailscale.com/client/tailscale+
@@ -166,6 +173,7 @@ tailscale.com/cmd/tailscale dependencies: (generated by github.com/tailscale/dep
tailscale.com/util/must from tailscale.com/clientupdate/distsign+
tailscale.com/util/nocasemaps from tailscale.com/types/ipproto
tailscale.com/util/quarantine from tailscale.com/cmd/tailscale/cli
tailscale.com/util/rands from tailscale.com/tsweb
tailscale.com/util/set from tailscale.com/derp+
tailscale.com/util/singleflight from tailscale.com/net/dnscache+
tailscale.com/util/slicesx from tailscale.com/net/dns/recursive+
@@ -328,12 +336,12 @@ tailscale.com/cmd/tailscale dependencies: (generated by github.com/tailscale/dep
hash/crc32 from compress/gzip+
hash/maphash from go4.org/mem
html from html/template+
html/template from github.com/gorilla/csrf
html/template from github.com/gorilla/csrf+
image from github.com/skip2/go-qrcode+
image/color from github.com/skip2/go-qrcode+
image/png from github.com/skip2/go-qrcode
internal/abi from crypto/x509/internal/macos+
internal/asan from syscall+
internal/asan from internal/runtime/maps+
internal/bisect from internal/godebug
internal/bytealg from bytes+
internal/byteorder from crypto/cipher+
@@ -345,14 +353,15 @@ tailscale.com/cmd/tailscale dependencies: (generated by github.com/tailscale/dep
internal/goarch from crypto/internal/fips140deps/cpu+
internal/godebug from archive/tar+
internal/godebugs from internal/godebug+
internal/goexperiment from runtime+
internal/goexperiment from hash/maphash+
internal/goos from crypto/x509+
internal/itoa from internal/poll+
internal/msan from syscall+
internal/msan from internal/runtime/maps+
internal/nettrace from net+
internal/oserror from io/fs+
internal/poll from net+
internal/profilerecord from runtime
internal/profile from net/http/pprof
internal/profilerecord from runtime+
internal/race from internal/poll+
internal/reflectlite from context+
internal/runtime/atomic from internal/runtime/exithook+
@@ -394,6 +403,7 @@ tailscale.com/cmd/tailscale dependencies: (generated by github.com/tailscale/dep
net/http/httputil from tailscale.com/client/web+
net/http/internal from net/http+
net/http/internal/ascii from net/http+
net/http/pprof from tailscale.com/tsweb
net/netip from go4.org/netipx+
net/textproto from golang.org/x/net/http/httpguts+
net/url from crypto/x509+
@@ -408,6 +418,8 @@ tailscale.com/cmd/tailscale dependencies: (generated by github.com/tailscale/dep
regexp/syntax from regexp
runtime from archive/tar+
runtime/debug from tailscale.com+
runtime/pprof from net/http/pprof
runtime/trace from net/http/pprof
slices from tailscale.com/client/web+
sort from compress/flate+
strconv from archive/tar+

View File

@@ -27,6 +27,7 @@ import (
"tailscale.com/net/tshttpproxy"
"tailscale.com/tailcfg"
"tailscale.com/types/key"
"tailscale.com/util/eventbus"
)
var debugArgs struct {
@@ -72,11 +73,14 @@ func debugMode(args []string) error {
}
func runMonitor(ctx context.Context, loop bool) error {
b := eventbus.New()
defer b.Close()
dump := func(st *netmon.State) {
j, _ := json.MarshalIndent(st, "", " ")
os.Stderr.Write(j)
}
mon, err := netmon.New(log.Printf)
mon, err := netmon.New(b, log.Printf)
if err != nil {
return err
}

View File

@@ -81,6 +81,10 @@ tailscale.com/cmd/tailscaled dependencies: (generated by github.com/tailscale/de
L github.com/aws/smithy-go/transport/http from github.com/aws/aws-sdk-go-v2/aws/middleware+
L github.com/aws/smithy-go/transport/http/internal/io from github.com/aws/smithy-go/transport/http
L github.com/aws/smithy-go/waiter from github.com/aws/aws-sdk-go-v2/service/ssm
github.com/coder/websocket from tailscale.com/util/eventbus
github.com/coder/websocket/internal/errd from github.com/coder/websocket
github.com/coder/websocket/internal/util from github.com/coder/websocket
github.com/coder/websocket/internal/xsync from github.com/coder/websocket
L github.com/coreos/go-iptables/iptables from tailscale.com/util/linuxfw
LD 💣 github.com/creack/pty from tailscale.com/ssh/tailssh
W 💣 github.com/dblohm7/wingoes from github.com/dblohm7/wingoes/com+
@@ -286,7 +290,7 @@ tailscale.com/cmd/tailscaled dependencies: (generated by github.com/tailscale/de
tailscale.com/ipn/store/mem from tailscale.com/ipn/ipnlocal+
L tailscale.com/kube/kubeapi from tailscale.com/ipn/store/kubestore+
L tailscale.com/kube/kubeclient from tailscale.com/ipn/store/kubestore
tailscale.com/kube/kubetypes from tailscale.com/envknob
tailscale.com/kube/kubetypes from tailscale.com/envknob+
tailscale.com/licenses from tailscale.com/client/web
tailscale.com/log/filelogger from tailscale.com/logpolicy
tailscale.com/log/sockstatlog from tailscale.com/ipn/ipnlocal
@@ -353,6 +357,7 @@ tailscale.com/cmd/tailscaled dependencies: (generated by github.com/tailscale/de
tailscale.com/tstime from tailscale.com/control/controlclient+
tailscale.com/tstime/mono from tailscale.com/net/tstun+
tailscale.com/tstime/rate from tailscale.com/derp+
tailscale.com/tsweb from tailscale.com/util/eventbus
tailscale.com/tsweb/varz from tailscale.com/cmd/tailscaled+
tailscale.com/types/appctype from tailscale.com/ipn/ipnlocal
tailscale.com/types/dnstype from tailscale.com/ipn/ipnlocal+
@@ -382,6 +387,7 @@ tailscale.com/cmd/tailscaled dependencies: (generated by github.com/tailscale/de
💣 tailscale.com/util/deephash from tailscale.com/ipn/ipnlocal+
L 💣 tailscale.com/util/dirwalk from tailscale.com/metrics+
tailscale.com/util/dnsname from tailscale.com/appc+
tailscale.com/util/eventbus from tailscale.com/tsd+
tailscale.com/util/execqueue from tailscale.com/control/controlclient+
tailscale.com/util/goroutines from tailscale.com/ipn/ipnlocal
tailscale.com/util/groupmember from tailscale.com/client/web+
@@ -587,9 +593,9 @@ tailscale.com/cmd/tailscaled dependencies: (generated by github.com/tailscale/de
hash/crc32 from compress/gzip+
hash/maphash from go4.org/mem
html from html/template+
html/template from github.com/gorilla/csrf
html/template from github.com/gorilla/csrf+
internal/abi from crypto/x509/internal/macos+
internal/asan from syscall+
internal/asan from internal/runtime/maps+
internal/bisect from internal/godebug
internal/bytealg from bytes+
internal/byteorder from crypto/cipher+
@@ -601,10 +607,10 @@ tailscale.com/cmd/tailscaled dependencies: (generated by github.com/tailscale/de
internal/goarch from crypto/internal/fips140deps/cpu+
internal/godebug from archive/tar+
internal/godebugs from internal/godebug+
internal/goexperiment from runtime+
internal/goexperiment from hash/maphash+
internal/goos from crypto/x509+
internal/itoa from internal/poll+
internal/msan from syscall+
internal/msan from internal/runtime/maps+
internal/nettrace from net+
internal/oserror from io/fs+
internal/poll from net+

View File

@@ -339,7 +339,9 @@ var debugMux *http.ServeMux
func run() (err error) {
var logf logger.Logf = log.Printf
sys := new(tsd.System)
// Install an event bus as early as possible, so that it's
// available universally when setting up everything else.
sys := tsd.NewSystem()
// Parse config, if specified, to fail early if it's invalid.
var conf *conffile.Config
@@ -354,9 +356,7 @@ func run() (err error) {
var netMon *netmon.Monitor
isWinSvc := isWindowsService()
if !isWinSvc {
netMon, err = netmon.New(func(format string, args ...any) {
logf(format, args...)
})
netMon, err = netmon.New(sys.Bus.Get(), logf)
if err != nil {
return fmt.Errorf("netmon.New: %w", err)
}

View File

@@ -327,8 +327,8 @@ func beWindowsSubprocess() bool {
log.Printf("Error pre-loading \"%s\": %v", fqWintunPath, err)
}
sys := new(tsd.System)
netMon, err := netmon.New(log.Printf)
sys := tsd.NewSystem()
netMon, err := netmon.New(sys.Bus.Get(), log.Printf)
if err != nil {
log.Fatalf("Could not create netMon: %v", err)
}

View File

@@ -100,7 +100,7 @@ func newIPN(jsConfig js.Value) map[string]any {
logtail := logtail.NewLogger(c, log.Printf)
logf := logtail.Logf
sys := new(tsd.System)
sys := tsd.NewSystem()
sys.Set(store)
dialer := &tsdial.Dialer{Logf: logf}
eng, err := wgengine.NewUserspaceEngine(logf, wgengine.Config{

View File

@@ -18,6 +18,9 @@ import (
"tailscale.com/derp/xdp"
"tailscale.com/net/netutil"
"tailscale.com/tsweb"
// Support for prometheus varz in tsweb
_ "tailscale.com/tsweb/promvarz"
)
var (

View File

@@ -96,6 +96,9 @@ func (a *Dialer) httpsFallbackDelay() time.Duration {
var _ = envknob.RegisterBool("TS_USE_CONTROL_DIAL_PLAN") // to record at init time whether it's in use
func (a *Dialer) dial(ctx context.Context) (*ClientConn, error) {
a.logPort80Failure.Store(true)
// If we don't have a dial plan, just fall back to dialing the single
// host we know about.
useDialPlan := envknob.BoolDefaultTrue("TS_USE_CONTROL_DIAL_PLAN")
@@ -278,7 +281,9 @@ func (d *Dialer) forceNoise443() bool {
// This heuristic works around networks where port 80 is MITMed and
// appears to work for a bit post-Upgrade but then gets closed,
// such as seen in https://github.com/tailscale/tailscale/issues/13597.
d.logf("controlhttp: forcing port 443 dial due to recent noise dial")
if d.logPort80Failure.CompareAndSwap(true, false) {
d.logf("controlhttp: forcing port 443 dial due to recent noise dial")
}
return true
}
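The new logPort80Failure field implements a log-once-per-dial pattern: the flag is armed at the start of a dial and only the first racing caller that compare-and-swaps it logs. A minimal standalone sketch of the same idea, using only the standard library:

package example

import (
	"log"
	"sync/atomic"
)

type dialer struct {
	logOnce atomic.Bool // armed at the start of each dial
}

func (d *dialer) startDial() { d.logOnce.Store(true) }

func (d *dialer) notePort80Interception() {
	// Only the first caller flips true->false and logs; later racers stay quiet.
	if d.logOnce.CompareAndSwap(true, false) {
		log.Printf("forcing port 443 dial due to recent noise dial")
	}
}

func main() {
	d := &dialer{}
	d.startDial()
	for i := 0; i < 3; i++ {
		d.notePort80Interception() // logs exactly once
	}
}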

View File

@@ -6,6 +6,7 @@ package controlhttp
import (
"net/http"
"net/url"
"sync/atomic"
"time"
"tailscale.com/health"
@@ -90,6 +91,11 @@ type Dialer struct {
proxyFunc func(*http.Request) (*url.URL, error) // or nil
// logPort80Failure is whether we should log about port 80 interceptions
// and forcing a port 443 dial. We do this only once per "dial" method,
// which can result in many concurrent racing dialHost calls.
logPort80Failure atomic.Bool
// For tests only
drainFinished chan struct{}
omitCertErrorLogging bool

View File

@@ -32,7 +32,6 @@ import (
"tailscale.com/net/tsdial"
"tailscale.com/tailcfg"
"tailscale.com/tstest"
"tailscale.com/tstest/deptest"
"tailscale.com/tstime"
"tailscale.com/types/key"
"tailscale.com/types/logger"
@@ -822,14 +821,3 @@ func (c *closeTrackConn) Close() error {
c.d.noteClose(c)
return c.Conn.Close()
}
func TestDeps(t *testing.T) {
deptest.DepChecker{
GOOS: "darwin",
GOARCH: "arm64",
BadDeps: map[string]string{
// Only the controlhttpserver needs WebSockets...
"github.com/coder/websocket": "controlhttp client shouldn't need websockets",
},
}.Check(t)
}

View File

@@ -17,9 +17,7 @@ import (
"tailscale.com/derp"
"tailscale.com/net/netmon"
"tailscale.com/tstest/deptest"
"tailscale.com/types/key"
"tailscale.com/util/set"
)
func TestSendRecv(t *testing.T) {
@@ -487,23 +485,3 @@ func TestProbe(t *testing.T) {
}
}
}
func TestDeps(t *testing.T) {
deptest.DepChecker{
GOOS: "darwin",
GOARCH: "arm64",
BadDeps: map[string]string{
"github.com/coder/websocket": "shouldn't link websockets except on js/wasm",
},
}.Check(t)
deptest.DepChecker{
GOOS: "darwin",
GOARCH: "arm64",
Tags: "ts_debug_websockets",
WantDeps: set.Of(
"github.com/coder/websocket",
),
}.Check(t)
}

View File

@@ -429,10 +429,16 @@ func App() string {
// is a shared cert available.
func IsCertShareReadOnlyMode() bool {
m := String("TS_CERT_SHARE_MODE")
return m == modeRO
return m == "ro"
}
const modeRO = "ro"
// IsCertShareReadWriteMode returns true if this instance is the replica
// responsible for issuing and renewing TLS certs in an HA setup with certs
// shared between multiple replicas.
func IsCertShareReadWriteMode() bool {
m := String("TS_CERT_SHARE_MODE")
return m == "rw"
}
// CrashOnUnexpected reports whether the Tailscale client should panic
// on unexpected conditions. If TS_DEBUG_CRASH_ON_UNEXPECTED is set, that's

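Illustrative only: how a consumer might branch on the two cert-share helpers added above (the kubestore change later in this compare does essentially this). The certShareMode helper is a stand-in, not part of this change.

package example

import "tailscale.com/envknob"

func certShareMode() string {
	switch {
	case envknob.IsCertShareReadWriteMode():
		return "rw" // this replica issues and renews the shared TLS certs
	case envknob.IsCertShareReadOnlyMode():
		return "ro" // this replica only reads certs issued by the rw replica
	default:
		return "" // cert sharing disabled
	}
}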
View File

@@ -27,6 +27,8 @@ type VIPService struct {
Addrs []string `json:"addrs,omitempty"`
// Comment is an optional text string for display in the admin panel.
Comment string `json:"comment,omitempty"`
// Annotations are optional key-value pairs that can be used to store arbitrary metadata.
Annotations map[string]string `json:"annotations,omitempty"`
// Ports are the ports of a VIPService that will be configured via Tailscale serve config.
// If set, any node wishing to advertise this VIPService must have this port configured via Tailscale serve.
Ports []string `json:"ports,omitempty"`

View File

@@ -145,9 +145,15 @@ func (c *ConfigVAlpha) ToPrefs() (MaskedPrefs, error) {
mp.AppConnector = *c.AppConnector
mp.AppConnectorSet = true
}
// Configfile should be the source of truth for whether this node
// advertises any services. We need to ensure that each reload updates
// the currently advertised services, because otherwise the transition from 'some
// services are advertised' to 'advertised services are empty/unset in
// conffile' would have no effect (especially given that an empty
// service slice would be omitted from the JSON config).
mp.AdvertiseServicesSet = true
if c.AdvertiseServices != nil {
mp.AdvertiseServices = c.AdvertiseServices
mp.AdvertiseServicesSet = true
}
return mp, nil
}

View File

@@ -958,7 +958,9 @@ func (b *LocalBackend) linkChange(delta *netmon.ChangeDelta) {
if peerAPIListenAsync && b.netMap != nil && b.state == ipn.Running {
want := b.netMap.GetAddresses().Len()
if len(b.peerAPIListeners) < want {
have := len(b.peerAPIListeners)
b.logf("[v1] linkChange: have %d peerAPIListeners, want %d", have, want)
if have < want {
b.logf("linkChange: peerAPIListeners too low; trying again")
b.goTracker.Go(b.initPeerAPIListener)
}
@@ -2380,12 +2382,10 @@ func (b *LocalBackend) Start(opts ipn.Options) error {
}
b.applyPrefsToHostinfoLocked(hostinfo, prefs)
b.setNetMapLocked(nil)
persistv := prefs.Persist().AsStruct()
if persistv == nil {
persistv = new(persist.Persist)
}
b.updateFilterLocked(nil, ipn.PrefsView{})
if b.portpoll != nil {
b.portpollOnce.Do(func() {
@@ -2404,11 +2404,9 @@ func (b *LocalBackend) Start(opts ipn.Options) error {
}
var auditLogShutdown func()
// Audit logging is only available if the client has set up a proper persistent
// store for the logs in sys.
store, ok := b.sys.AuditLogStore.GetOK()
if !ok {
b.logf("auditlog: [unexpected] no persistent audit log storage configured. using memory store.")
// Use memory store by default if no explicit store is provided.
store = auditlog.NewLogStore(&memstore.Store{})
}
@@ -3481,18 +3479,20 @@ func (b *LocalBackend) onTailnetDefaultAutoUpdate(au bool) {
// can still manually enable auto-updates on this node.
return
}
b.logf("using tailnet default auto-update setting: %v", au)
prefsClone := prefs.AsStruct()
prefsClone.AutoUpdate.Apply = opt.NewBool(au)
_, err := b.editPrefsLockedOnEntry(&ipn.MaskedPrefs{
Prefs: *prefsClone,
AutoUpdateSet: ipn.AutoUpdatePrefsMask{
ApplySet: true,
},
}, unlock)
if err != nil {
b.logf("failed to apply tailnet-wide default for auto-updates (%v): %v", au, err)
return
if clientupdate.CanAutoUpdate() {
b.logf("using tailnet default auto-update setting: %v", au)
prefsClone := prefs.AsStruct()
prefsClone.AutoUpdate.Apply = opt.NewBool(au)
_, err := b.editPrefsLockedOnEntry(&ipn.MaskedPrefs{
Prefs: *prefsClone,
AutoUpdateSet: ipn.AutoUpdatePrefsMask{
ApplySet: true,
},
}, unlock)
if err != nil {
b.logf("failed to apply tailnet-wide default for auto-updates (%v): %v", au, err)
return
}
}
}
@@ -4968,7 +4968,7 @@ func (b *LocalBackend) authReconfig() {
return
}
oneCGNATRoute := shouldUseOneCGNATRoute(b.logf, b.sys.ControlKnobs(), version.OS())
oneCGNATRoute := shouldUseOneCGNATRoute(b.logf, b.sys.NetMon.Get(), b.sys.ControlKnobs(), version.OS())
rcfg := b.routerConfig(cfg, prefs, oneCGNATRoute)
err = b.e.Reconfig(cfg, rcfg, dcfg)
@@ -4992,7 +4992,7 @@ func (b *LocalBackend) authReconfig() {
//
// The versionOS is a Tailscale-style version ("iOS", "macOS") and not
// a runtime.GOOS.
func shouldUseOneCGNATRoute(logf logger.Logf, controlKnobs *controlknobs.Knobs, versionOS string) bool {
func shouldUseOneCGNATRoute(logf logger.Logf, mon *netmon.Monitor, controlKnobs *controlknobs.Knobs, versionOS string) bool {
if controlKnobs != nil {
// Explicit enabling or disabling always take precedence.
if v, ok := controlKnobs.OneCGNAT.Load().Get(); ok {
@@ -5007,7 +5007,7 @@ func shouldUseOneCGNATRoute(logf logger.Logf, controlKnobs *controlknobs.Knobs,
// use fine-grained routes if another interface is also using the CGNAT
// IP range.
if versionOS == "macOS" {
hasCGNATInterface, err := netmon.HasCGNATInterface()
hasCGNATInterface, err := mon.HasCGNATInterface()
if err != nil {
logf("shouldUseOneCGNATRoute: Could not determine if any interfaces use CGNAT: %v", err)
return false
@@ -5369,6 +5369,7 @@ func (b *LocalBackend) initPeerAPIListener() {
ln, err = ps.listen(a.Addr(), b.prevIfState)
if err != nil {
if peerAPIListenAsync {
b.logf("possibly transient peerapi listen(%q) error, will try again on linkChange: %v", a.Addr(), err)
// Expected. But we fix it later in linkChange
// ("peerAPIListeners too low").
continue
@@ -5920,6 +5921,9 @@ func (b *LocalBackend) requestEngineStatusAndWait() {
b.logf("requestEngineStatusAndWait: got status update.")
}
// [controlclient.Auto] implements [auditlog.Transport].
var _ auditlog.Transport = (*controlclient.Auto)(nil)
// setControlClientLocked sets the control client to cc,
// which may be nil.
//
@@ -5927,12 +5931,12 @@ func (b *LocalBackend) requestEngineStatusAndWait() {
func (b *LocalBackend) setControlClientLocked(cc controlclient.Client) {
b.cc = cc
b.ccAuto, _ = cc.(*controlclient.Auto)
if b.auditLogger != nil {
if t, ok := b.cc.(auditlog.Transport); ok && b.auditLogger != nil {
if err := b.auditLogger.SetProfileID(b.pm.CurrentProfile().ID()); err != nil {
b.logf("audit logger set profile ID failure: %v", err)
}
if err := b.auditLogger.Start(b.ccAuto); err != nil {
if err := b.auditLogger.Start(t); err != nil {
b.logf("audit logger start failure: %v", err)
}
}
@@ -7531,6 +7535,7 @@ func (b *LocalBackend) resetForProfileChangeLockedOnEntry(unlock unlockOnce) err
return nil
}
b.setNetMapLocked(nil) // Reset netmap.
b.updateFilterLocked(nil, ipn.PrefsView{})
// Reset the NetworkMap in the engine
b.e.SetNetworkMap(new(netmap.NetworkMap))
if prevCC := b.resetControlClientLocked(); prevCC != nil {

View File

@@ -44,6 +44,7 @@ import (
"tailscale.com/tsd"
"tailscale.com/tstest"
"tailscale.com/types/dnstype"
"tailscale.com/types/ipproto"
"tailscale.com/types/key"
"tailscale.com/types/logger"
"tailscale.com/types/logid"
@@ -60,6 +61,7 @@ import (
"tailscale.com/util/syspolicy/source"
"tailscale.com/wgengine"
"tailscale.com/wgengine/filter"
"tailscale.com/wgengine/filter/filtertype"
"tailscale.com/wgengine/wgcfg"
)
@@ -434,7 +436,7 @@ func (panicOnUseTransport) RoundTrip(*http.Request) (*http.Response, error) {
}
func newTestLocalBackend(t testing.TB) *LocalBackend {
return newTestLocalBackendWithSys(t, new(tsd.System))
return newTestLocalBackendWithSys(t, tsd.NewSystem())
}
// newTestLocalBackendWithSys creates a new LocalBackend with the given tsd.System.
@@ -446,7 +448,7 @@ func newTestLocalBackendWithSys(t testing.TB, sys *tsd.System) *LocalBackend {
sys.Set(new(mem.Store))
}
if _, ok := sys.Engine.GetOK(); !ok {
eng, err := wgengine.NewFakeUserspaceEngine(logf, sys.Set, sys.HealthTracker(), sys.UserMetricsRegistry())
eng, err := wgengine.NewFakeUserspaceEngine(logf, sys.Set, sys.HealthTracker(), sys.UserMetricsRegistry(), sys.Bus.Get())
if err != nil {
t.Fatalf("NewFakeUserspaceEngine: %v", err)
}
@@ -1508,6 +1510,15 @@ func TestReconfigureAppConnector(t *testing.T) {
func TestBackfillAppConnectorRoutes(t *testing.T) {
// Create backend with an empty app connector.
b := newTestBackend(t)
// newTestBackend creates a backend with a non-nil netmap,
// but this test requires a nil netmap.
// Otherwise, instead of backfilling, [LocalBackend.reconfigAppConnectorLocked]
// uses the domains and routes from netmap's [appctype.AppConnectorAttr].
// Additionally, a non-nil netmap makes reconfigAppConnectorLocked
// asynchronous, resulting in a flaky test.
// Therefore, we set the netmap to nil to simulate a fresh backend start
// or a profile switch where the netmap is not yet available.
b.setNetMapLocked(nil)
if err := b.Start(ipn.Options{}); err != nil {
t.Fatal(err)
}
@@ -4400,10 +4411,10 @@ func newLocalBackendWithTestControl(t *testing.T, enableLogging bool, newControl
if enableLogging {
logf = tstest.WhileTestRunningLogger(t)
}
sys := new(tsd.System)
sys := tsd.NewSystem()
store := new(mem.Store)
sys.Set(store)
e, err := wgengine.NewFakeUserspaceEngine(logf, sys.Set, sys.HealthTracker(), sys.UserMetricsRegistry())
e, err := wgengine.NewFakeUserspaceEngine(logf, sys.Set, sys.HealthTracker(), sys.UserMetricsRegistry(), sys.Bus.Get())
if err != nil {
t.Fatalf("NewFakeUserspaceEngine: %v", err)
}
@@ -4743,32 +4754,132 @@ func TestLoginNotifications(t *testing.T) {
// TestConfigFileReload tests that the LocalBackend reloads its configuration
// when the configuration file changes.
func TestConfigFileReload(t *testing.T) {
cfg1 := `{"Hostname": "foo", "Version": "alpha0"}`
f := filepath.Join(t.TempDir(), "cfg")
must.Do(os.WriteFile(f, []byte(cfg1), 0600))
sys := new(tsd.System)
sys.InitialConfig = must.Get(conffile.Load(f))
lb := newTestLocalBackendWithSys(t, sys)
must.Do(lb.Start(ipn.Options{}))
lb.mu.Lock()
hn := lb.hostinfo.Hostname
lb.mu.Unlock()
if hn != "foo" {
t.Fatalf("got %q; want %q", hn, "foo")
type testCase struct {
name string
initial *conffile.Config
updated *conffile.Config
checkFn func(*testing.T, *LocalBackend)
}
cfg2 := `{"Hostname": "bar", "Version": "alpha0"}`
must.Do(os.WriteFile(f, []byte(cfg2), 0600))
if !must.Get(lb.ReloadConfig()) {
t.Fatal("reload failed")
tests := []testCase{
{
name: "hostname_change",
initial: &conffile.Config{
Parsed: ipn.ConfigVAlpha{
Version: "alpha0",
Hostname: ptr.To("initial-host"),
},
},
updated: &conffile.Config{
Parsed: ipn.ConfigVAlpha{
Version: "alpha0",
Hostname: ptr.To("updated-host"),
},
},
checkFn: func(t *testing.T, b *LocalBackend) {
if got := b.Prefs().Hostname(); got != "updated-host" {
t.Errorf("hostname = %q; want updated-host", got)
}
},
},
{
name: "start_advertising_services",
initial: &conffile.Config{
Parsed: ipn.ConfigVAlpha{
Version: "alpha0",
},
},
updated: &conffile.Config{
Parsed: ipn.ConfigVAlpha{
Version: "alpha0",
AdvertiseServices: []string{"svc:abc", "svc:def"},
},
},
checkFn: func(t *testing.T, b *LocalBackend) {
if got := b.Prefs().AdvertiseServices().AsSlice(); !reflect.DeepEqual(got, []string{"svc:abc", "svc:def"}) {
t.Errorf("AdvertiseServices = %v; want [svc:abc, svc:def]", got)
}
},
},
{
name: "change_advertised_services",
initial: &conffile.Config{
Parsed: ipn.ConfigVAlpha{
Version: "alpha0",
AdvertiseServices: []string{"svc:abc", "svc:def"},
},
},
updated: &conffile.Config{
Parsed: ipn.ConfigVAlpha{
Version: "alpha0",
AdvertiseServices: []string{"svc:abc", "svc:ghi"},
},
},
checkFn: func(t *testing.T, b *LocalBackend) {
if got := b.Prefs().AdvertiseServices().AsSlice(); !reflect.DeepEqual(got, []string{"svc:abc", "svc:ghi"}) {
t.Errorf("AdvertiseServices = %v; want [svc:abc, svc:ghi]", got)
}
},
},
{
name: "unset_advertised_services",
initial: &conffile.Config{
Parsed: ipn.ConfigVAlpha{
Version: "alpha0",
AdvertiseServices: []string{"svc:abc"},
},
},
updated: &conffile.Config{
Parsed: ipn.ConfigVAlpha{
Version: "alpha0",
},
},
checkFn: func(t *testing.T, b *LocalBackend) {
if b.Prefs().AdvertiseServices().Len() != 0 {
t.Errorf("got %d AdvertiseServices, want none", b.Prefs().AdvertiseServices().Len())
}
},
},
}
lb.mu.Lock()
hn = lb.hostinfo.Hostname
lb.mu.Unlock()
if hn != "bar" {
t.Fatalf("got %q; want %q", hn, "bar")
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
dir := t.TempDir()
path := filepath.Join(dir, "tailscale.conf")
// Write initial config
initialJSON, err := json.Marshal(tc.initial.Parsed)
if err != nil {
t.Fatal(err)
}
if err := os.WriteFile(path, initialJSON, 0644); err != nil {
t.Fatal(err)
}
// Create backend with initial config
tc.initial.Path = path
tc.initial.Raw = initialJSON
sys := tsd.NewSystem()
sys.InitialConfig = tc.initial
b := newTestLocalBackendWithSys(t, sys)
// Update config file
updatedJSON, err := json.Marshal(tc.updated.Parsed)
if err != nil {
t.Fatal(err)
}
if err := os.WriteFile(path, updatedJSON, 0644); err != nil {
t.Fatal(err)
}
// Trigger reload
if ok, err := b.ReloadConfig(); !ok || err != nil {
t.Fatalf("ReloadConfig() = %v, %v; want true, nil", ok, err)
}
// Check outcome
tc.checkFn(t, b)
})
}
}
@@ -5206,3 +5317,60 @@ func TestUpdateIngressLocked(t *testing.T) {
})
}
}
// TestSrcCapPacketFilter tests that LocalBackend handles packet filters with
// SrcCaps instead of Srcs (IPs)
func TestSrcCapPacketFilter(t *testing.T) {
lb := newLocalBackendWithTestControl(t, false, func(tb testing.TB, opts controlclient.Options) controlclient.Client {
return newClient(tb, opts)
})
if err := lb.Start(ipn.Options{}); err != nil {
t.Fatalf("(*LocalBackend).Start(): %v", err)
}
var k key.NodePublic
must.Do(k.UnmarshalText([]byte("nodekey:5c8f86d5fc70d924e55f02446165a5dae8f822994ad26bcf4b08fd841f9bf261")))
controlClient := lb.cc.(*mockControl)
controlClient.send(nil, "", false, &netmap.NetworkMap{
SelfNode: (&tailcfg.Node{
Addresses: []netip.Prefix{netip.MustParsePrefix("1.1.1.1/32")},
}).View(),
Peers: []tailcfg.NodeView{
(&tailcfg.Node{
Addresses: []netip.Prefix{netip.MustParsePrefix("2.2.2.2/32")},
ID: 2,
Key: k,
CapMap: tailcfg.NodeCapMap{"cap-X": nil}, // node 2 has cap
}).View(),
(&tailcfg.Node{
Addresses: []netip.Prefix{netip.MustParsePrefix("3.3.3.3/32")},
ID: 3,
Key: k,
CapMap: tailcfg.NodeCapMap{}, // node 3 does not have the cap
}).View(),
},
PacketFilter: []filtertype.Match{{
IPProto: views.SliceOf([]ipproto.Proto{ipproto.TCP}),
SrcCaps: []tailcfg.NodeCapability{"cap-X"}, // cap in packet filter rule
Dsts: []filtertype.NetPortRange{{
Net: netip.MustParsePrefix("1.1.1.1/32"),
Ports: filtertype.PortRange{
First: 22,
Last: 22,
},
}},
}},
})
f := lb.GetFilterForTest()
res := f.Check(netip.MustParseAddr("2.2.2.2"), netip.MustParseAddr("1.1.1.1"), 22, ipproto.TCP)
if res != filter.Accept {
t.Errorf("Check(2.2.2.2, ...) = %s, want %s", res, filter.Accept)
}
res = f.Check(netip.MustParseAddr("3.3.3.3"), netip.MustParseAddr("1.1.1.1"), 22, ipproto.TCP)
if !res.IsDrop() {
t.Error("IsDrop() for node without cap = false, want true")
}
}

View File

@@ -47,10 +47,10 @@ func TestLocalLogLines(t *testing.T) {
idA := logid(0xaa)
// set up a LocalBackend, super bare bones. No functional data.
sys := new(tsd.System)
sys := tsd.NewSystem()
store := new(mem.Store)
sys.Set(store)
e, err := wgengine.NewFakeUserspaceEngine(logf, sys.Set, sys.HealthTracker(), sys.UserMetricsRegistry())
e, err := wgengine.NewFakeUserspaceEngine(logf, sys.Set, sys.HealthTracker(), sys.UserMetricsRegistry(), sys.Bus.Get())
if err != nil {
t.Fatal(err)
}

View File

@@ -481,7 +481,7 @@ func (h *peerAPIHandler) handleServeInterfaces(w http.ResponseWriter, r *http.Re
fmt.Fprintf(w, "<h3>Could not get the default route: %s</h3>\n", html.EscapeString(err.Error()))
}
if hasCGNATInterface, err := netmon.HasCGNATInterface(); hasCGNATInterface {
if hasCGNATInterface, err := h.ps.b.sys.NetMon.Get().HasCGNATInterface(); hasCGNATInterface {
fmt.Fprintln(w, "<p>There is another interface using the CGNAT range.</p>")
} else if err != nil {
fmt.Fprintf(w, "<p>Could not check for CGNAT interfaces: %s</p>\n", html.EscapeString(err.Error()))

View File

@@ -34,6 +34,7 @@ import (
"tailscale.com/tstest"
"tailscale.com/types/logger"
"tailscale.com/types/netmap"
"tailscale.com/util/eventbus"
"tailscale.com/util/must"
"tailscale.com/util/usermetric"
"tailscale.com/wgengine"
@@ -643,9 +644,12 @@ func TestPeerAPIReplyToDNSQueries(t *testing.T) {
h.isSelf = false
h.remoteAddr = netip.MustParseAddrPort("100.150.151.152:12345")
bus := eventbus.New()
defer bus.Close()
ht := new(health.Tracker)
reg := new(usermetric.Registry)
eng, _ := wgengine.NewFakeUserspaceEngine(logger.Discard, 0, ht, reg)
eng, _ := wgengine.NewFakeUserspaceEngine(logger.Discard, 0, ht, reg, bus)
pm := must.Get(newProfileManager(new(mem.Store), t.Logf, ht))
h.ps = &peerAPIServer{
b: &LocalBackend{
@@ -695,9 +699,12 @@ func TestPeerAPIPrettyReplyCNAME(t *testing.T) {
var h peerAPIHandler
h.remoteAddr = netip.MustParseAddrPort("100.150.151.152:12345")
bus := eventbus.New()
defer bus.Close()
ht := new(health.Tracker)
reg := new(usermetric.Registry)
eng, _ := wgengine.NewFakeUserspaceEngine(logger.Discard, 0, ht, reg)
eng, _ := wgengine.NewFakeUserspaceEngine(logger.Discard, 0, ht, reg, bus)
pm := must.Get(newProfileManager(new(mem.Store), t.Logf, ht))
var a *appc.AppConnector
if shouldStore {
@@ -768,10 +775,12 @@ func TestPeerAPIReplyToDNSQueriesAreObserved(t *testing.T) {
var h peerAPIHandler
h.remoteAddr = netip.MustParseAddrPort("100.150.151.152:12345")
bus := eventbus.New()
defer bus.Close()
rc := &appctest.RouteCollector{}
ht := new(health.Tracker)
reg := new(usermetric.Registry)
eng, _ := wgengine.NewFakeUserspaceEngine(logger.Discard, 0, ht, reg)
eng, _ := wgengine.NewFakeUserspaceEngine(logger.Discard, 0, ht, reg, bus)
pm := must.Get(newProfileManager(new(mem.Store), t.Logf, ht))
var a *appc.AppConnector
if shouldStore {
@@ -833,10 +842,12 @@ func TestPeerAPIReplyToDNSQueriesAreObservedWithCNAMEFlattening(t *testing.T) {
var h peerAPIHandler
h.remoteAddr = netip.MustParseAddrPort("100.150.151.152:12345")
bus := eventbus.New()
defer bus.Close()
ht := new(health.Tracker)
reg := new(usermetric.Registry)
rc := &appctest.RouteCollector{}
eng, _ := wgengine.NewFakeUserspaceEngine(logger.Discard, 0, ht, reg)
eng, _ := wgengine.NewFakeUserspaceEngine(logger.Discard, 0, ht, reg, bus)
pm := must.Get(newProfileManager(new(mem.Store), t.Logf, ht))
var a *appc.AppConnector
if shouldStore {

View File

@@ -877,11 +877,12 @@ func newTestBackend(t *testing.T) *LocalBackend {
logf = logger.WithPrefix(tstest.WhileTestRunningLogger(t), "... ")
}
sys := &tsd.System{}
sys := tsd.NewSystem()
e, err := wgengine.NewUserspaceEngine(logf, wgengine.Config{
SetSubsystem: sys.Set,
HealthTracker: sys.HealthTracker(),
Metrics: sys.UserMetricsRegistry(),
EventBus: sys.Bus.Get(),
})
if err != nil {
t.Fatal(err)

View File

@@ -295,10 +295,10 @@ func TestStateMachine(t *testing.T) {
c := qt.New(t)
logf := tstest.WhileTestRunningLogger(t)
sys := new(tsd.System)
sys := tsd.NewSystem()
store := new(testStateStorage)
sys.Set(store)
e, err := wgengine.NewFakeUserspaceEngine(logf, sys.Set, sys.HealthTracker(), sys.UserMetricsRegistry())
e, err := wgengine.NewFakeUserspaceEngine(logf, sys.Set, sys.HealthTracker(), sys.UserMetricsRegistry(), sys.Bus.Get())
if err != nil {
t.Fatalf("NewFakeUserspaceEngine: %v", err)
}
@@ -735,12 +735,10 @@ func TestStateMachine(t *testing.T) {
// b.Shutdown() explicitly ourselves.
previousCC.assertShutdown(false)
// Note: unpause happens because ipn needs to get at least one netmap
// on startup, otherwise UIs can't show the node list, login
// name, etc when in state ipn.Stopped.
// Arguably they shouldn't try. But they currently do.
nn := notifies.drain(2)
cc.assertCalls("New", "Login")
// We already have a netmap for this node,
// and WantRunning is false, so cc should be paused.
cc.assertCalls("New", "Login", "pause")
c.Assert(nn[0].Prefs, qt.IsNotNil)
c.Assert(nn[1].State, qt.IsNotNil)
c.Assert(nn[0].Prefs.WantRunning(), qt.IsFalse)
@@ -751,7 +749,11 @@ func TestStateMachine(t *testing.T) {
// When logged in but !WantRunning, ipn leaves us unpaused to retrieve
// the first netmap. Simulate that netmap being received, after which
// it should pause us, to avoid wasting CPU retrieving unnecessarily
// additional netmap updates.
// additional netmap updates. Since our LocalBackend instance already
// has a netmap, we will reset it to nil to simulate the first netmap
// retrieval.
b.setNetMapLocked(nil)
cc.assertCalls("unpause")
//
// TODO: really the various GUIs and prefs should be refactored to
// not require the netmap structure at all when starting while
@@ -853,7 +855,7 @@ func TestStateMachine(t *testing.T) {
// The last test case is the most common one: restarting when both
// logged in and WantRunning.
t.Logf("\n\nStart5")
notifies.expect(1)
notifies.expect(2)
c.Assert(b.Start(ipn.Options{}), qt.IsNil)
{
// NOTE: cc.Shutdown() is correct here, since we didn't call
@@ -861,30 +863,32 @@ func TestStateMachine(t *testing.T) {
previousCC.assertShutdown(false)
cc.assertCalls("New", "Login")
nn := notifies.drain(1)
nn := notifies.drain(2)
cc.assertCalls()
c.Assert(nn[0].Prefs, qt.IsNotNil)
c.Assert(nn[0].Prefs.LoggedOut(), qt.IsFalse)
c.Assert(nn[0].Prefs.WantRunning(), qt.IsTrue)
c.Assert(b.State(), qt.Equals, ipn.NoState)
// We're logged in and have a valid netmap, so we should
// be in the Starting state.
c.Assert(nn[1].State, qt.IsNotNil)
c.Assert(*nn[1].State, qt.Equals, ipn.Starting)
c.Assert(b.State(), qt.Equals, ipn.Starting)
}
// Control server accepts our valid key from before.
t.Logf("\n\nLoginFinished5")
notifies.expect(1)
notifies.expect(0)
cc.send(nil, "", true, &netmap.NetworkMap{
SelfNode: (&tailcfg.Node{MachineAuthorized: true}).View(),
})
{
nn := notifies.drain(1)
notifies.drain(0)
cc.assertCalls()
// NOTE: No LoginFinished message since no interactive
// login was needed.
c.Assert(nn[0].State, qt.IsNotNil)
c.Assert(ipn.Starting, qt.Equals, *nn[0].State)
// NOTE: No prefs change this time. WantRunning stays true.
// We were in Starting in the first place, so that doesn't
// change either.
// change either, so we don't expect any notifications.
c.Assert(ipn.Starting, qt.Equals, b.State())
}
t.Logf("\n\nExpireKey")
@@ -930,9 +934,9 @@ func TestStateMachine(t *testing.T) {
func TestEditPrefsHasNoKeys(t *testing.T) {
logf := tstest.WhileTestRunningLogger(t)
sys := new(tsd.System)
sys := tsd.NewSystem()
sys.Set(new(mem.Store))
e, err := wgengine.NewFakeUserspaceEngine(logf, sys.Set, sys.HealthTracker(), sys.UserMetricsRegistry())
e, err := wgengine.NewFakeUserspaceEngine(logf, sys.Set, sys.HealthTracker(), sys.UserMetricsRegistry(), sys.Bus.Get())
if err != nil {
t.Fatalf("NewFakeUserspaceEngine: %v", err)
}
@@ -1010,10 +1014,10 @@ func TestWGEngineStatusRace(t *testing.T) {
t.Skip("test fails")
c := qt.New(t)
logf := tstest.WhileTestRunningLogger(t)
sys := new(tsd.System)
sys := tsd.NewSystem()
sys.Set(new(mem.Store))
eng, err := wgengine.NewFakeUserspaceEngine(logf, sys.Set)
eng, err := wgengine.NewFakeUserspaceEngine(logf, sys.Set, sys.Bus.Get())
c.Assert(err, qt.IsNil)
t.Cleanup(eng.Close)
sys.Set(eng)

View File

@@ -517,12 +517,12 @@ type newControlClientFn func(tb testing.TB, opts controlclient.Options) controlc
func newLocalBackendWithTestControl(tb testing.TB, newControl newControlClientFn, enableLogging bool) *ipnlocal.LocalBackend {
tb.Helper()
sys := &tsd.System{}
sys := tsd.NewSystem()
store := &mem.Store{}
sys.Set(store)
logf := testLogger(tb, enableLogging)
e, err := wgengine.NewFakeUserspaceEngine(logf, sys.Set, sys.HealthTracker(), sys.UserMetricsRegistry())
e, err := wgengine.NewFakeUserspaceEngine(logf, sys.Set, sys.HealthTracker(), sys.UserMetricsRegistry(), sys.Bus.Get())
if err != nil {
tb.Fatalf("NewFakeUserspaceEngine: %v", err)
}

View File

@@ -56,6 +56,7 @@ import (
"tailscale.com/types/ptr"
"tailscale.com/types/tkatype"
"tailscale.com/util/clientmetric"
"tailscale.com/util/eventbus"
"tailscale.com/util/httphdr"
"tailscale.com/util/httpm"
"tailscale.com/util/mak"
@@ -818,23 +819,31 @@ func (h *Handler) serveDebugPortmap(w http.ResponseWriter, r *http.Request) {
done := make(chan bool, 1)
var c *portmapper.Client
c = portmapper.NewClient(logger.WithPrefix(logf, "portmapper: "), h.b.NetMon(), debugKnobs, h.b.ControlKnobs(), func() {
logf("portmapping changed.")
logf("have mapping: %v", c.HaveMapping())
c = portmapper.NewClient(portmapper.Config{
Logf: logger.WithPrefix(logf, "portmapper: "),
NetMon: h.b.NetMon(),
DebugKnobs: debugKnobs,
ControlKnobs: h.b.ControlKnobs(),
OnChange: func() {
logf("portmapping changed.")
logf("have mapping: %v", c.HaveMapping())
if ext, ok := c.GetCachedMappingOrStartCreatingOne(); ok {
logf("cb: mapping: %v", ext)
select {
case done <- true:
default:
if ext, ok := c.GetCachedMappingOrStartCreatingOne(); ok {
logf("cb: mapping: %v", ext)
select {
case done <- true:
default:
}
return
}
return
}
logf("cb: no mapping")
logf("cb: no mapping")
},
})
defer c.Close()
netMon, err := netmon.New(logger.WithPrefix(logf, "monitor: "))
bus := eventbus.New()
defer bus.Close()
netMon, err := netmon.New(bus, logger.WithPrefix(logf, "monitor: "))
if err != nil {
logf("error creating monitor: %v", err)
return

View File

@@ -336,10 +336,10 @@ func TestServeWatchIPNBus(t *testing.T) {
func newTestLocalBackend(t testing.TB) *ipnlocal.LocalBackend {
var logf logger.Logf = logger.Discard
sys := new(tsd.System)
sys := tsd.NewSystem()
store := new(mem.Store)
sys.Set(store)
eng, err := wgengine.NewFakeUserspaceEngine(logf, sys.Set, sys.HealthTracker(), sys.UserMetricsRegistry())
eng, err := wgengine.NewFakeUserspaceEngine(logf, sys.Set, sys.HealthTracker(), sys.UserMetricsRegistry(), sys.Bus.Get())
if err != nil {
t.Fatalf("NewFakeUserspaceEngine: %v", err)
}

View File

@@ -13,11 +13,14 @@ import (
"strings"
"time"
"tailscale.com/envknob"
"tailscale.com/ipn"
"tailscale.com/ipn/store/mem"
"tailscale.com/kube/kubeapi"
"tailscale.com/kube/kubeclient"
"tailscale.com/kube/kubetypes"
"tailscale.com/types/logger"
"tailscale.com/util/dnsname"
"tailscale.com/util/mak"
)
@@ -32,21 +35,37 @@ const (
reasonTailscaleStateLoadFailed = "TailscaleStateLoadFailed"
eventTypeWarning = "Warning"
eventTypeNormal = "Normal"
keyTLSCert = "tls.crt"
keyTLSKey = "tls.key"
)
// Store is an ipn.StateStore that uses a Kubernetes Secret for persistence.
type Store struct {
client kubeclient.Client
canPatch bool
secretName string
client kubeclient.Client
canPatch bool
secretName string // state Secret
certShareMode string // 'ro', 'rw', or empty
podName string
// memory holds the latest tailscale state. Writes write state to a kube Secret and memory, Reads read from
// memory.
// memory holds the latest tailscale state. Writes write state to a kube
// Secret and memory, Reads read from memory.
memory mem.Store
}
// New returns a new Store that persists to the named Secret.
func New(_ logger.Logf, secretName string) (*Store, error) {
// New returns a new Store that persists state to Kubernetes Secret(s).
// Tailscale state is stored in a Secret named by the secretName parameter.
// TLS certs are stored in and retrieved from the state Secret, or from separate
// Secrets named after TLS endpoints when running in cert share mode.
func New(logf logger.Logf, secretName string) (*Store, error) {
c, err := newClient()
if err != nil {
return nil, err
}
return newWithClient(logf, c, secretName)
}
func newClient() (kubeclient.Client, error) {
c, err := kubeclient.New("tailscale-state-store")
if err != nil {
return nil, err
@@ -55,6 +74,10 @@ func New(_ logger.Logf, secretName string) (*Store, error) {
// Derive the API server address from the environment variables
c.SetURL(fmt.Sprintf("https://%s:%s", os.Getenv("KUBERNETES_SERVICE_HOST"), os.Getenv("KUBERNETES_SERVICE_PORT_HTTPS")))
}
return c, nil
}
func newWithClient(logf logger.Logf, c kubeclient.Client, secretName string) (*Store, error) {
canPatch, _, err := c.CheckSecretPermissions(context.Background(), secretName)
if err != nil {
return nil, err
@@ -63,11 +86,30 @@ func New(_ logger.Logf, secretName string) (*Store, error) {
client: c,
canPatch: canPatch,
secretName: secretName,
podName: os.Getenv("POD_NAME"),
}
if envknob.IsCertShareReadWriteMode() {
s.certShareMode = "rw"
} else if envknob.IsCertShareReadOnlyMode() {
s.certShareMode = "ro"
}
// Load latest state from kube Secret if it already exists.
if err := s.loadState(); err != nil && err != ipn.ErrStateNotExist {
return nil, fmt.Errorf("error loading state from kube Secret: %w", err)
}
// If we are in cert share mode, pre-load existing shared certs.
if s.certShareMode == "rw" || s.certShareMode == "ro" {
sel := s.certSecretSelector()
if err := s.loadCerts(context.Background(), sel); err != nil {
// We will attempt to retrieve the certs from Secrets again when a request
// for an HTTPS endpoint is received.
log.Printf("[unexpected] error loading TLS certs: %v", err)
}
}
if s.certShareMode == "ro" {
go s.runCertReload(context.Background(), logf)
}
return s, nil
}
@@ -84,27 +126,110 @@ func (s *Store) ReadState(id ipn.StateKey) ([]byte, error) {
// WriteState implements the StateStore interface.
func (s *Store) WriteState(id ipn.StateKey, bs []byte) (err error) {
return s.updateStateSecret(map[string][]byte{string(id): bs})
}
// WriteTLSCertAndKey writes a TLS cert and key to domain.crt, domain.key fields of a Tailscale Kubernetes node's state
// Secret.
func (s *Store) WriteTLSCertAndKey(domain string, cert, key []byte) error {
return s.updateStateSecret(map[string][]byte{domain + ".crt": cert, domain + ".key": key})
}
func (s *Store) updateStateSecret(data map[string][]byte) (err error) {
ctx, cancel := context.WithTimeout(context.Background(), timeout)
defer func() {
if err == nil {
for id, bs := range data {
// The in-memory store does not distinguish between values read from state Secret on
// init and values written to afterwards. Values read from the state
// Secret will always be sanitized, so we also need to sanitize values written to store
// later, so that the Read logic can just lookup keys in sanitized form.
s.memory.WriteState(ipn.StateKey(sanitizeKey(id)), bs)
}
s.memory.WriteState(ipn.StateKey(sanitizeKey(id)), bs)
}
}()
return s.updateSecret(map[string][]byte{string(id): bs}, s.secretName)
}
// WriteTLSCertAndKey writes a TLS cert and key to domain.crt, domain.key fields
// of a Tailscale Kubernetes node's state Secret.
func (s *Store) WriteTLSCertAndKey(domain string, cert, key []byte) (err error) {
if s.certShareMode == "ro" {
log.Printf("[unexpected] TLS cert and key write in read-only mode")
}
if err := dnsname.ValidHostname(domain); err != nil {
return fmt.Errorf("invalid domain name %q: %w", domain, err)
}
secretName := s.secretName
data := map[string][]byte{
domain + ".crt": cert,
domain + ".key": key,
}
// If we run in cert share mode, cert and key for a DNS name are written
// to a separate Secret.
if s.certShareMode == "rw" {
secretName = domain
data = map[string][]byte{
keyTLSCert: cert,
keyTLSKey: key,
}
}
if err := s.updateSecret(data, secretName); err != nil {
return fmt.Errorf("error writing TLS cert and key to Secret: %w", err)
}
// TODO(irbekrm): certs for write replicas are currently not
// written to memory to avoid out of sync memory state after
// Ingress resources have been recreated. This means that TLS
// certs for write replicas are retrieved from the Secret on
// each HTTPS request. This is a temporary solution till we
// implement a Secret watch.
if s.certShareMode != "rw" {
s.memory.WriteState(ipn.StateKey(domain+".crt"), cert)
s.memory.WriteState(ipn.StateKey(domain+".key"), key)
}
return nil
}
// ReadTLSCertAndKey reads a TLS cert and key from memory or from a
// domain-specific Secret. It first checks the in-memory store, if not found in
// memory and running cert store in read-only mode, looks up a Secret.
// Note that write replicas of HA Ingress always retrieve TLS certs from Secrets.
func (s *Store) ReadTLSCertAndKey(domain string) (cert, key []byte, err error) {
if err := dnsname.ValidHostname(domain); err != nil {
return nil, nil, fmt.Errorf("invalid domain name %q: %w", domain, err)
}
certKey := domain + ".crt"
keyKey := domain + ".key"
cert, err = s.memory.ReadState(ipn.StateKey(certKey))
if err == nil {
key, err = s.memory.ReadState(ipn.StateKey(keyKey))
if err == nil {
return cert, key, nil
}
}
if s.certShareMode == "" {
return nil, nil, ipn.ErrStateNotExist
}
ctx, cancel := context.WithTimeout(context.Background(), timeout)
defer cancel()
secret, err := s.client.GetSecret(ctx, domain)
if err != nil {
if kubeclient.IsNotFoundErr(err) {
// TODO(irbekrm): we should return a more specific error
// that wraps ipn.ErrStateNotExist here.
return nil, nil, ipn.ErrStateNotExist
}
return nil, nil, fmt.Errorf("getting TLS Secret %q: %w", domain, err)
}
cert = secret.Data[keyTLSCert]
key = secret.Data[keyTLSKey]
if len(cert) == 0 || len(key) == 0 {
return nil, nil, ipn.ErrStateNotExist
}
// TODO(irbekrm): a read between these two separate writes would
// get a mismatched cert and key. Allow writing both cert and
// key to the memory store in a single, lock-protected operation.
//
// TODO(irbekrm): currently certs for write replicas of HA Ingress get
// retrieved from the cluster Secret on each HTTPS request to avoid a
// situation when after Ingress recreation stale certs are read from
// memory.
// Fix this by watching Secrets to ensure that memory store gets updated
// when Secrets are deleted.
if s.certShareMode == "ro" {
s.memory.WriteState(ipn.StateKey(certKey), cert)
s.memory.WriteState(ipn.StateKey(keyKey), key)
}
return cert, key, nil
}
func (s *Store) updateSecret(data map[string][]byte, secretName string) (err error) {
ctx, cancel := context.WithTimeout(context.Background(), timeout)
defer func() {
if err != nil {
if err := s.client.Event(ctx, eventTypeWarning, reasonTailscaleStateUpdateFailed, err.Error()); err != nil {
log.Printf("kubestore: error creating tailscaled state update Event: %v", err)
@@ -116,17 +241,17 @@ func (s *Store) updateStateSecret(data map[string][]byte) (err error) {
}
cancel()
}()
secret, err := s.client.GetSecret(ctx, s.secretName)
secret, err := s.client.GetSecret(ctx, secretName)
if err != nil {
// If the Secret does not exist, create it with the required data.
if kubeclient.IsNotFoundErr(err) {
if kubeclient.IsNotFoundErr(err) && s.canCreateSecret(secretName) {
return s.client.CreateSecret(ctx, &kubeapi.Secret{
TypeMeta: kubeapi.TypeMeta{
APIVersion: "v1",
Kind: "Secret",
},
ObjectMeta: kubeapi.ObjectMeta{
Name: s.secretName,
Name: secretName,
},
Data: func(m map[string][]byte) map[string][]byte {
d := make(map[string][]byte, len(m))
@@ -137,9 +262,9 @@ func (s *Store) updateStateSecret(data map[string][]byte) (err error) {
}(data),
})
}
return err
return fmt.Errorf("error getting Secret %s: %w", secretName, err)
}
if s.canPatch {
if s.canPatchSecret(secretName) {
var m []kubeclient.JSONPatch
// If the user has pre-created a Secret with no data, we need to ensure the top level /data field.
if len(secret.Data) == 0 {
@@ -166,8 +291,8 @@ func (s *Store) updateStateSecret(data map[string][]byte) (err error) {
})
}
}
if err := s.client.JSONPatchResource(ctx, s.secretName, kubeclient.TypeSecrets, m); err != nil {
return fmt.Errorf("error patching Secret %s: %w", s.secretName, err)
if err := s.client.JSONPatchResource(ctx, secretName, kubeclient.TypeSecrets, m); err != nil {
return fmt.Errorf("error patching Secret %s: %w", secretName, err)
}
return nil
}
@@ -176,9 +301,9 @@ func (s *Store) updateStateSecret(data map[string][]byte) (err error) {
mak.Set(&secret.Data, sanitizeKey(key), val)
}
if err := s.client.UpdateSecret(ctx, secret); err != nil {
return err
return fmt.Errorf("error updating Secret %s: %w", s.secretName, err)
}
return err
return nil
}
func (s *Store) loadState() (err error) {
@@ -202,6 +327,96 @@ func (s *Store) loadState() (err error) {
return nil
}
// runCertReload relists and reloads all TLS certs for endpoints shared by this
// node from Secrets other than the state Secret, to ensure that renewed certs eventually get loaded.
// It is not critical to reload a cert immediately after
// renewal, so a daily check is acceptable.
// Currently (3/2025) this is only used for the shared HA Ingress certs on 'read' replicas.
// Note that if shared certs are not found in memory on an HTTPS request, we
// do a Secret lookup, so this mechanism does not need to ensure that newly
// added Ingresses' certs get loaded.
func (s *Store) runCertReload(ctx context.Context, logf logger.Logf) {
ticker := time.NewTicker(time.Hour * 24)
defer ticker.Stop()
for {
select {
case <-ctx.Done():
return
case <-ticker.C:
sel := s.certSecretSelector()
if err := s.loadCerts(ctx, sel); err != nil {
logf("[unexpected] error reloading TLS certs: %v", err)
}
}
}
}
// loadCerts lists all Secrets matching the provided selector and loads TLS
// certs and keys from those.
func (s *Store) loadCerts(ctx context.Context, sel map[string]string) error {
ss, err := s.client.ListSecrets(ctx, sel)
if err != nil {
return fmt.Errorf("error listing TLS Secrets: %w", err)
}
for _, secret := range ss.Items {
if !hasTLSData(&secret) {
continue
}
// Only load secrets that have valid domain names (ending in .ts.net)
if !strings.HasSuffix(secret.Name, ".ts.net") {
continue
}
s.memory.WriteState(ipn.StateKey(secret.Name)+".crt", secret.Data[keyTLSCert])
s.memory.WriteState(ipn.StateKey(secret.Name)+".key", secret.Data[keyTLSKey])
}
return nil
}
// canCreateSecret returns true if this node should be allowed to create the given
// Secret in its namespace.
func (s *Store) canCreateSecret(secret string) bool {
// Only allow creating the state Secret (and not TLS Secrets).
return secret == s.secretName
}
// canPatchSecret returns true if this node should be allowed to patch the given
// Secret.
func (s *Store) canPatchSecret(secret string) bool {
// For backwards compatibility reasons, setups where the proxies are not
// given PATCH permissions for state Secrets are allowed. For TLS
// Secrets, we should always have PATCH permissions.
if secret == s.secretName {
return s.canPatch
}
return true
}
// certSecretSelector returns a label selector that can be used to list all
// Secrets that aren't Tailscale state Secrets and contain TLS certificates for
// HTTPS endpoints that this node serves.
// Currently (3/2025) this only applies to the Kubernetes Operator's ingress
// ProxyGroup.
func (s *Store) certSecretSelector() map[string]string {
if s.podName == "" {
return map[string]string{}
}
p := strings.LastIndex(s.podName, "-")
if p == -1 {
return map[string]string{}
}
pgName := s.podName[:p]
return map[string]string{
kubetypes.LabelSecretType: "certs",
kubetypes.LabelManaged: "true",
"tailscale.com/proxy-group": pgName,
}
}
// hasTLSData returns true if the provided Secret contains non-empty TLS cert and key.
func hasTLSData(s *kubeapi.Secret) bool {
return len(s.Data[keyTLSCert]) != 0 && len(s.Data[keyTLSKey]) != 0
}
// sanitizeKey converts any value that can be converted to a string into a valid Kubernetes Secret key.
// Valid characters are alphanumeric, -, _, and .
// https://kubernetes.io/docs/concepts/configuration/secret/#restriction-names-data.

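A hedged usage sketch of the Store API added above, assuming the code runs in a pod with the usual Kubernetes service-account credentials; the import path, Secret name, and domain are illustrative placeholders.

package example

import (
	"log"

	"tailscale.com/ipn/store/kubestore"
)

func demo() error {
	st, err := kubestore.New(log.Printf, "ts-state")
	if err != nil {
		return err
	}
	// In "rw" cert share mode this writes to a Secret named after the domain;
	// otherwise the cert and key land in the state Secret.
	if err := st.WriteTLSCertAndKey("my-app.tailnetxyz.ts.net", []byte("cert"), []byte("key")); err != nil {
		return err
	}
	// Reads check memory first and, in "ro" mode, fall back to the per-domain Secret.
	cert, key, err := st.ReadTLSCertAndKey("my-app.tailnetxyz.ts.net")
	if err != nil {
		return err
	}
	_, _ = cert, key
	return nil
}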
View File

@@ -4,33 +4,37 @@
package kubestore
import (
"bytes"
"context"
"encoding/json"
"fmt"
"strings"
"testing"
"github.com/google/go-cmp/cmp"
"tailscale.com/envknob"
"tailscale.com/ipn"
"tailscale.com/ipn/store/mem"
"tailscale.com/kube/kubeapi"
"tailscale.com/kube/kubeclient"
)
func TestUpdateStateSecret(t *testing.T) {
func TestWriteState(t *testing.T) {
tests := []struct {
name string
initial map[string][]byte
updates map[string][]byte
key ipn.StateKey
value []byte
wantData map[string][]byte
allowPatch bool
}{
{
name: "basic_update",
name: "basic_write",
initial: map[string][]byte{
"existing": []byte("old"),
},
updates: map[string][]byte{
"foo": []byte("bar"),
},
key: "foo",
value: []byte("bar"),
wantData: map[string][]byte{
"existing": []byte("old"),
"foo": []byte("bar"),
@@ -42,35 +46,17 @@ func TestUpdateStateSecret(t *testing.T) {
initial: map[string][]byte{
"foo": []byte("old"),
},
updates: map[string][]byte{
"foo": []byte("new"),
},
key: "foo",
value: []byte("new"),
wantData: map[string][]byte{
"foo": []byte("new"),
},
allowPatch: true,
},
{
name: "multiple_updates",
initial: map[string][]byte{
"keep": []byte("keep"),
},
updates: map[string][]byte{
"foo": []byte("bar"),
"baz": []byte("qux"),
},
wantData: map[string][]byte{
"keep": []byte("keep"),
"foo": []byte("bar"),
"baz": []byte("qux"),
},
allowPatch: true,
},
{
name: "create_new_secret",
updates: map[string][]byte{
"foo": []byte("bar"),
},
name: "create_new_secret",
key: "foo",
value: []byte("bar"),
wantData: map[string][]byte{
"foo": []byte("bar"),
},
@@ -81,29 +67,23 @@ func TestUpdateStateSecret(t *testing.T) {
initial: map[string][]byte{
"foo": []byte("old"),
},
updates: map[string][]byte{
"foo": []byte("new"),
},
key: "foo",
value: []byte("new"),
wantData: map[string][]byte{
"foo": []byte("new"),
},
allowPatch: false,
},
{
name: "sanitize_keys",
name: "sanitize_key",
initial: map[string][]byte{
"clean-key": []byte("old"),
},
updates: map[string][]byte{
"dirty@key": []byte("new"),
"also/bad": []byte("value"),
"good.key": []byte("keep"),
},
key: "dirty@key",
value: []byte("new"),
wantData: map[string][]byte{
"clean-key": []byte("old"),
"dirty_key": []byte("new"),
"also_bad": []byte("value"),
"good.key": []byte("keep"),
},
allowPatch: true,
},
@@ -152,13 +132,13 @@ func TestUpdateStateSecret(t *testing.T) {
s := &Store{
client: client,
canPatch: tt.allowPatch,
secretName: "test-secret",
secretName: "ts-state",
memory: mem.Store{},
}
err := s.updateStateSecret(tt.updates)
err := s.WriteState(tt.key, tt.value)
if err != nil {
t.Errorf("updateStateSecret() error = %v", err)
t.Errorf("WriteState() error = %v", err)
return
}
@@ -168,16 +148,579 @@ func TestUpdateStateSecret(t *testing.T) {
}
// Verify memory store was updated
for k, v := range tt.updates {
got, err := s.memory.ReadState(ipn.StateKey(sanitizeKey(k)))
got, err := s.memory.ReadState(ipn.StateKey(sanitizeKey(string(tt.key))))
if err != nil {
t.Errorf("reading from memory store: %v", err)
}
if !cmp.Equal(got, tt.value) {
t.Errorf("memory store key %q = %v, want %v", tt.key, got, tt.value)
}
})
}
}
func TestWriteTLSCertAndKey(t *testing.T) {
const (
testDomain = "my-app.tailnetxyz.ts.net"
testCert = "fake-cert"
testKey = "fake-key"
)
tests := []struct {
name string
initial map[string][]byte // pre-existing cert and key
certShareMode string
allowPatch bool // whether client can patch the Secret
wantSecretName string // name of the Secret where cert and key should be written
wantSecretData map[string][]byte
wantMemoryStore map[ipn.StateKey][]byte
}{
{
name: "basic_write",
initial: map[string][]byte{
"existing": []byte("old"),
},
allowPatch: true,
wantSecretName: "ts-state",
wantSecretData: map[string][]byte{
"existing": []byte("old"),
"my-app.tailnetxyz.ts.net.crt": []byte(testCert),
"my-app.tailnetxyz.ts.net.key": []byte(testKey),
},
wantMemoryStore: map[ipn.StateKey][]byte{
"my-app.tailnetxyz.ts.net.crt": []byte(testCert),
"my-app.tailnetxyz.ts.net.key": []byte(testKey),
},
},
{
name: "cert_share_mode_write",
certShareMode: "rw",
allowPatch: true,
wantSecretName: "my-app.tailnetxyz.ts.net",
wantSecretData: map[string][]byte{
"tls.crt": []byte(testCert),
"tls.key": []byte(testKey),
},
},
{
name: "cert_share_mode_write_update_existing",
initial: map[string][]byte{
"tls.crt": []byte("old-cert"),
"tls.key": []byte("old-key"),
},
certShareMode: "rw",
allowPatch: true,
wantSecretName: "my-app.tailnetxyz.ts.net",
wantSecretData: map[string][]byte{
"tls.crt": []byte(testCert),
"tls.key": []byte(testKey),
},
},
{
name: "update_existing",
initial: map[string][]byte{
"my-app.tailnetxyz.ts.net.crt": []byte("old-cert"),
"my-app.tailnetxyz.ts.net.key": []byte("old-key"),
},
certShareMode: "",
allowPatch: true,
wantSecretName: "ts-state",
wantSecretData: map[string][]byte{
"my-app.tailnetxyz.ts.net.crt": []byte(testCert),
"my-app.tailnetxyz.ts.net.key": []byte(testKey),
},
wantMemoryStore: map[ipn.StateKey][]byte{
"my-app.tailnetxyz.ts.net.crt": []byte(testCert),
"my-app.tailnetxyz.ts.net.key": []byte(testKey),
},
},
{
name: "patch_denied",
certShareMode: "",
allowPatch: false,
wantSecretName: "ts-state",
wantSecretData: map[string][]byte{
"my-app.tailnetxyz.ts.net.crt": []byte(testCert),
"my-app.tailnetxyz.ts.net.key": []byte(testKey),
},
wantMemoryStore: map[ipn.StateKey][]byte{
"my-app.tailnetxyz.ts.net.crt": []byte(testCert),
"my-app.tailnetxyz.ts.net.key": []byte(testKey),
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
// Set POD_NAME for testing selectors
envknob.Setenv("POD_NAME", "ingress-proxies-1")
defer envknob.Setenv("POD_NAME", "")
secret := tt.initial // track current state
client := &kubeclient.FakeClient{
GetSecretImpl: func(ctx context.Context, name string) (*kubeapi.Secret, error) {
if secret == nil {
return nil, &kubeapi.Status{Code: 404}
}
return &kubeapi.Secret{Data: secret}, nil
},
CheckSecretPermissionsImpl: func(ctx context.Context, name string) (bool, bool, error) {
return tt.allowPatch, true, nil
},
CreateSecretImpl: func(ctx context.Context, s *kubeapi.Secret) error {
if s.Name != tt.wantSecretName {
t.Errorf("CreateSecret called with wrong name, got %q, want %q", s.Name, tt.wantSecretName)
}
secret = s.Data
return nil
},
UpdateSecretImpl: func(ctx context.Context, s *kubeapi.Secret) error {
if s.Name != tt.wantSecretName {
t.Errorf("UpdateSecret called with wrong name, got %q, want %q", s.Name, tt.wantSecretName)
}
secret = s.Data
return nil
},
JSONPatchResourceImpl: func(ctx context.Context, name, resourceType string, patches []kubeclient.JSONPatch) error {
if !tt.allowPatch {
return &kubeapi.Status{Reason: "Forbidden"}
}
if name != tt.wantSecretName {
t.Errorf("JSONPatchResource called with wrong name, got %q, want %q", name, tt.wantSecretName)
}
if secret == nil {
secret = make(map[string][]byte)
}
for _, p := range patches {
if p.Op == "add" && p.Path == "/data" {
secret = p.Value.(map[string][]byte)
} else if p.Op == "add" && strings.HasPrefix(p.Path, "/data/") {
key := strings.TrimPrefix(p.Path, "/data/")
secret[key] = p.Value.([]byte)
}
}
return nil
},
}
s := &Store{
client: client,
canPatch: tt.allowPatch,
secretName: tt.wantSecretName,
certShareMode: tt.certShareMode,
memory: mem.Store{},
}
err := s.WriteTLSCertAndKey(testDomain, []byte(testCert), []byte(testKey))
if err != nil {
t.Errorf("WriteTLSCertAndKey() error = '%v'", err)
return
}
// Verify secret data
if diff := cmp.Diff(secret, tt.wantSecretData); diff != "" {
t.Errorf("secret data mismatch (-got +want):\n%s", diff)
}
// Verify memory store was updated
for key, want := range tt.wantMemoryStore {
got, err := s.memory.ReadState(key)
if err != nil {
t.Errorf("reading from memory store: %v", err)
continue
}
if !cmp.Equal(got, v) {
t.Errorf("memory store key %q = %v, want %v", k, got, v)
if !cmp.Equal(got, want) {
t.Errorf("memory store key %q = %v, want %v", key, got, want)
}
}
})
}
}
func TestReadTLSCertAndKey(t *testing.T) {
const (
testDomain = "my-app.tailnetxyz.ts.net"
testCert = "fake-cert"
testKey = "fake-key"
)
tests := []struct {
name string
memoryStore map[ipn.StateKey][]byte // pre-existing memory store state
certShareMode string
domain string
secretData map[string][]byte // data to return from mock GetSecret
secretGetErr error // error to return from mock GetSecret
wantCert []byte
wantKey []byte
wantErr error
// what should end up in memory store after the store is created
wantMemoryStore map[ipn.StateKey][]byte
}{
{
name: "found_in_memory",
memoryStore: map[ipn.StateKey][]byte{
"my-app.tailnetxyz.ts.net.crt": []byte(testCert),
"my-app.tailnetxyz.ts.net.key": []byte(testKey),
},
domain: testDomain,
wantCert: []byte(testCert),
wantKey: []byte(testKey),
wantMemoryStore: map[ipn.StateKey][]byte{
"my-app.tailnetxyz.ts.net.crt": []byte(testCert),
"my-app.tailnetxyz.ts.net.key": []byte(testKey),
},
},
{
name: "not_found_in_memory",
domain: testDomain,
wantErr: ipn.ErrStateNotExist,
},
{
name: "cert_share_ro_mode_found_in_secret",
certShareMode: "ro",
domain: testDomain,
secretData: map[string][]byte{
"tls.crt": []byte(testCert),
"tls.key": []byte(testKey),
},
wantCert: []byte(testCert),
wantKey: []byte(testKey),
wantMemoryStore: map[ipn.StateKey][]byte{
"my-app.tailnetxyz.ts.net.crt": []byte(testCert),
"my-app.tailnetxyz.ts.net.key": []byte(testKey),
},
},
{
name: "cert_share_rw_mode_found_in_secret",
certShareMode: "rw",
domain: testDomain,
secretData: map[string][]byte{
"tls.crt": []byte(testCert),
"tls.key": []byte(testKey),
},
wantCert: []byte(testCert),
wantKey: []byte(testKey),
},
{
name: "cert_share_ro_mode_found_in_memory",
certShareMode: "ro",
memoryStore: map[ipn.StateKey][]byte{
"my-app.tailnetxyz.ts.net.crt": []byte(testCert),
"my-app.tailnetxyz.ts.net.key": []byte(testKey),
},
domain: testDomain,
wantCert: []byte(testCert),
wantKey: []byte(testKey),
wantMemoryStore: map[ipn.StateKey][]byte{
"my-app.tailnetxyz.ts.net.crt": []byte(testCert),
"my-app.tailnetxyz.ts.net.key": []byte(testKey),
},
},
{
name: "cert_share_ro_mode_not_found",
certShareMode: "ro",
domain: testDomain,
secretGetErr: &kubeapi.Status{Code: 404},
wantErr: ipn.ErrStateNotExist,
},
{
name: "cert_share_ro_mode_empty_cert_in_secret",
certShareMode: "ro",
domain: testDomain,
secretData: map[string][]byte{
"tls.crt": {},
"tls.key": []byte(testKey),
},
wantErr: ipn.ErrStateNotExist,
},
{
name: "cert_share_ro_mode_kube_api_error",
certShareMode: "ro",
domain: testDomain,
secretGetErr: fmt.Errorf("api error"),
wantErr: fmt.Errorf("getting TLS Secret %q: api error", sanitizeKey(testDomain)),
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
client := &kubeclient.FakeClient{
GetSecretImpl: func(ctx context.Context, name string) (*kubeapi.Secret, error) {
if tt.secretGetErr != nil {
return nil, tt.secretGetErr
}
return &kubeapi.Secret{Data: tt.secretData}, nil
},
}
s := &Store{
client: client,
secretName: "ts-state",
certShareMode: tt.certShareMode,
memory: mem.Store{},
}
// Initialize memory store
for k, v := range tt.memoryStore {
s.memory.WriteState(k, v)
}
gotCert, gotKey, err := s.ReadTLSCertAndKey(tt.domain)
if tt.wantErr != nil {
if err == nil {
t.Errorf("ReadTLSCertAndKey() error = nil, want error containing %v", tt.wantErr)
return
}
if !strings.Contains(err.Error(), tt.wantErr.Error()) {
t.Errorf("ReadTLSCertAndKey() error = %v, want error containing %v", err, tt.wantErr)
}
return
}
if err != nil {
t.Errorf("ReadTLSCertAndKey() unexpected error: %v", err)
return
}
if !bytes.Equal(gotCert, tt.wantCert) {
t.Errorf("ReadTLSCertAndKey() gotCert = %v, want %v", gotCert, tt.wantCert)
}
if !bytes.Equal(gotKey, tt.wantKey) {
t.Errorf("ReadTLSCertAndKey() gotKey = %v, want %v", gotKey, tt.wantKey)
}
// Verify memory store contents after operation
if tt.wantMemoryStore != nil {
for key, want := range tt.wantMemoryStore {
got, err := s.memory.ReadState(key)
if err != nil {
t.Errorf("reading from memory store: %v", err)
continue
}
if !bytes.Equal(got, want) {
t.Errorf("memory store key %q = %v, want %v", key, got, want)
}
}
}
})
}
}
func TestNewWithClient(t *testing.T) {
const (
secretName = "ts-state"
testCert = "fake-cert"
testKey = "fake-key"
)
certSecretsLabels := map[string]string{
"tailscale.com/secret-type": "certs",
"tailscale.com/managed": "true",
"tailscale.com/proxy-group": "ingress-proxies",
}
// Helper function to create Secret objects for testing
makeSecret := func(name string, labels map[string]string, certSuffix string) kubeapi.Secret {
return kubeapi.Secret{
ObjectMeta: kubeapi.ObjectMeta{
Name: name,
Labels: labels,
},
Data: map[string][]byte{
"tls.crt": []byte(testCert + certSuffix),
"tls.key": []byte(testKey + certSuffix),
},
}
}
tests := []struct {
name string
stateSecretContents map[string][]byte // data in state Secret
TLSSecrets []kubeapi.Secret // list of TLS cert Secrets
certMode string
secretGetErr error // error to return from GetSecret
secretsListErr error // error to return from ListSecrets
wantMemoryStoreContents map[ipn.StateKey][]byte
wantErr error
}{
{
name: "empty_state_secret",
stateSecretContents: map[string][]byte{},
wantMemoryStoreContents: map[ipn.StateKey][]byte{},
},
{
name: "state_secret_not_found",
secretGetErr: &kubeapi.Status{Code: 404},
wantMemoryStoreContents: map[ipn.StateKey][]byte{},
},
{
name: "state_secret_get_error",
secretGetErr: fmt.Errorf("some error"),
wantErr: fmt.Errorf("error loading state from kube Secret: some error"),
},
{
name: "load_existing_state",
stateSecretContents: map[string][]byte{
"foo": []byte("bar"),
"baz": []byte("qux"),
},
wantMemoryStoreContents: map[ipn.StateKey][]byte{
"foo": []byte("bar"),
"baz": []byte("qux"),
},
},
{
name: "load_select_certs_in_read_only_mode",
certMode: "ro",
stateSecretContents: map[string][]byte{
"foo": []byte("bar"),
},
TLSSecrets: []kubeapi.Secret{
makeSecret("app1.tailnetxyz.ts.net", certSecretsLabels, "1"),
makeSecret("app2.tailnetxyz.ts.net", certSecretsLabels, "2"),
makeSecret("some-other-secret", nil, "3"),
makeSecret("app3.other-proxies.ts.net", map[string]string{
"tailscale.com/secret-type": "certs",
"tailscale.com/managed": "true",
"tailscale.com/proxy-group": "some-other-proxygroup",
}, "4"),
},
wantMemoryStoreContents: map[ipn.StateKey][]byte{
"foo": []byte("bar"),
"app1.tailnetxyz.ts.net.crt": []byte(testCert + "1"),
"app1.tailnetxyz.ts.net.key": []byte(testKey + "1"),
"app2.tailnetxyz.ts.net.crt": []byte(testCert + "2"),
"app2.tailnetxyz.ts.net.key": []byte(testKey + "2"),
},
},
{
name: "load_select_certs_in_read_write_mode",
certMode: "rw",
stateSecretContents: map[string][]byte{
"foo": []byte("bar"),
},
TLSSecrets: []kubeapi.Secret{
makeSecret("app1.tailnetxyz.ts.net", certSecretsLabels, "1"),
makeSecret("app2.tailnetxyz.ts.net", certSecretsLabels, "2"),
makeSecret("some-other-secret", nil, "3"),
makeSecret("app3.other-proxies.ts.net", map[string]string{
"tailscale.com/secret-type": "certs",
"tailscale.com/managed": "true",
"tailscale.com/proxy-group": "some-other-proxygroup",
}, "4"),
},
wantMemoryStoreContents: map[ipn.StateKey][]byte{
"foo": []byte("bar"),
"app1.tailnetxyz.ts.net.crt": []byte(testCert + "1"),
"app1.tailnetxyz.ts.net.key": []byte(testKey + "1"),
"app2.tailnetxyz.ts.net.crt": []byte(testCert + "2"),
"app2.tailnetxyz.ts.net.key": []byte(testKey + "2"),
},
},
{
name: "list_cert_secrets_fails",
certMode: "ro",
stateSecretContents: map[string][]byte{
"foo": []byte("bar"),
},
secretsListErr: fmt.Errorf("list error"),
// The error is logged but not returned, and state is still loaded
wantMemoryStoreContents: map[ipn.StateKey][]byte{
"foo": []byte("bar"),
},
},
{
name: "cert_secrets_not_loaded_when_not_in_share_mode",
certMode: "",
stateSecretContents: map[string][]byte{
"foo": []byte("bar"),
},
TLSSecrets: []kubeapi.Secret{
makeSecret("app1.tailnetxyz.ts.net", certSecretsLabels, "1"),
},
wantMemoryStoreContents: map[ipn.StateKey][]byte{
"foo": []byte("bar"),
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
envknob.Setenv("TS_CERT_SHARE_MODE", tt.certMode)
t.Setenv("POD_NAME", "ingress-proxies-1")
client := &kubeclient.FakeClient{
GetSecretImpl: func(ctx context.Context, name string) (*kubeapi.Secret, error) {
if tt.secretGetErr != nil {
return nil, tt.secretGetErr
}
if name == secretName {
return &kubeapi.Secret{Data: tt.stateSecretContents}, nil
}
return nil, &kubeapi.Status{Code: 404}
},
CheckSecretPermissionsImpl: func(ctx context.Context, name string) (bool, bool, error) {
return true, true, nil
},
ListSecretsImpl: func(ctx context.Context, selector map[string]string) (*kubeapi.SecretList, error) {
if tt.secretsListErr != nil {
return nil, tt.secretsListErr
}
var matchingSecrets []kubeapi.Secret
for _, secret := range tt.TLSSecrets {
matches := true
for k, v := range selector {
if secret.Labels[k] != v {
matches = false
break
}
}
if matches {
matchingSecrets = append(matchingSecrets, secret)
}
}
return &kubeapi.SecretList{Items: matchingSecrets}, nil
},
}
s, err := newWithClient(t.Logf, client, secretName)
if tt.wantErr != nil {
if err == nil {
t.Errorf("NewWithClient() error = nil, want error containing %v", tt.wantErr)
return
}
if !strings.Contains(err.Error(), tt.wantErr.Error()) {
t.Errorf("NewWithClient() error = %v, want error containing %v", err, tt.wantErr)
}
return
}
if err != nil {
t.Errorf("NewWithClient() unexpected error: %v", err)
return
}
// Verify memory store contents
gotJSON, err := s.memory.ExportToJSON()
if err != nil {
t.Errorf("ExportToJSON failed: %v", err)
return
}
var got map[ipn.StateKey][]byte
if err := json.Unmarshal(gotJSON, &got); err != nil {
t.Errorf("failed to unmarshal memory store JSON: %v", err)
return
}
want := tt.wantMemoryStoreContents
if want == nil {
want = map[ipn.StateKey][]byte{}
}
if diff := cmp.Diff(got, want); diff != "" {
t.Errorf("memory store contents mismatch (-got +want):\n%s", diff)
}
})
}
}
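A purely illustrative sketch of the selector such a store could build: the test sets POD_NAME to "ingress-proxies-1" and expects only Secrets labeled for the "ingress-proxies" ProxyGroup to be loaded, so one plausible derivation is to strip the trailing StatefulSet ordinal. The helper names and the derivation are assumptions, not the store's actual implementation; only the label keys are visible in the test cases above.

// Hypothetical sketch: derive the cert-Secret label selector from POD_NAME.
// Not the actual implementation; it only mirrors the labels and naming
// convention visible in the test cases above.
package kubestoresketch

import (
	"os"
	"strings"
)

// proxyGroupFromPodName strips the trailing StatefulSet ordinal, e.g.
// "ingress-proxies-1" becomes "ingress-proxies".
func proxyGroupFromPodName(pod string) string {
	if i := strings.LastIndex(pod, "-"); i > 0 {
		return pod[:i]
	}
	return pod
}

// certSelector returns the label selector used to find shared TLS cert
// Secrets for this pod's ProxyGroup.
func certSelector() map[string]string {
	return map[string]string{
		"tailscale.com/secret-type": "certs",
		"tailscale.com/managed":     "true",
		"tailscale.com/proxy-group": proxyGroupFromPodName(os.Getenv("POD_NAME")),
	}
}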

View File

@@ -517,6 +517,7 @@ _Appears in:_
| `statefulSet` _[StatefulSet](#statefulset)_ | Configuration parameters for the proxy's StatefulSet. Tailscale<br />Kubernetes operator deploys a StatefulSet for each of the user<br />configured proxies (Tailscale Ingress, Tailscale Service, Connector). | | |
| `metrics` _[Metrics](#metrics)_ | Configuration for proxy metrics. Metrics are currently not supported<br />for egress proxies and for Ingress proxies that have been configured<br />with tailscale.com/experimental-forward-cluster-traffic-via-ingress<br />annotation. Note that the metrics are currently considered unstable<br />and will likely change in breaking ways in the future - we only<br />recommend that you use those for debugging purposes. | | |
| `tailscale` _[TailscaleConfig](#tailscaleconfig)_ | TailscaleConfig contains options to configure the tailscale-specific<br />parameters of proxies. | | |
| `useLetsEncryptStagingEnvironment` _boolean_ | Set UseLetsEncryptStagingEnvironment to true to issue TLS<br />certificates for any HTTPS endpoints exposed to the tailnet from<br />LetsEncrypt's staging environment.<br />https://letsencrypt.org/docs/staging-environment/<br />This setting only affects Tailscale Ingress resources.<br />By default Ingress TLS certificates are issued from LetsEncrypt's<br />production environment.<br />Changing this setting from true to false will result in any<br />existing certs being re-issued from the production environment.<br />Changing this setting from false (default) to true when certs have<br />already been provisioned from the production environment will NOT<br />result in certs being re-issued from the staging environment before<br />they need to be renewed. | | |
#### ProxyClassStatus
@@ -599,7 +600,7 @@ _Appears in:_
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `type` _[ProxyGroupType](#proxygrouptype)_ | Type of the ProxyGroup proxies. Supported types are egress and ingress.<br />Type is immutable once a ProxyGroup is created. | | Enum: [egress ingress] <br />Type: string <br /> |
| `type` _[ProxyGroupType](#proxygrouptype)_ | Type of the ProxyGroup proxies. Currently the only supported type is egress.<br />Type is immutable once a ProxyGroup is created. | | Enum: [egress ingress] <br />Type: string <br /> |
| `tags` _[Tags](#tags)_ | Tags that the Tailscale devices will be tagged with. Defaults to [tag:k8s].<br />If you specify custom tags here, make sure you also make the operator<br />an owner of these tags.<br />See https://tailscale.com/kb/1236/kubernetes-operator/#setting-up-the-kubernetes-operator.<br />Tags cannot be changed once a ProxyGroup device has been created.<br />Tag values must be in form ^tag:[a-zA-Z][a-zA-Z0-9-]*$. | | Pattern: `^tag:[a-zA-Z][a-zA-Z0-9-]*$` <br />Type: string <br /> |
| `replicas` _integer_ | Replicas specifies how many replicas to create the StatefulSet with.<br />Defaults to 2. | | Minimum: 0 <br /> |
| `hostnamePrefix` _[HostnamePrefix](#hostnameprefix)_ | HostnamePrefix is the hostname prefix to use for tailnet devices created<br />by the ProxyGroup. Each device will have the integer number from its<br />StatefulSet pod appended to this prefix to form the full hostname.<br />HostnamePrefix can contain lower case letters, numbers and dashes, it<br />must not start with a dash and must be between 1 and 62 characters long. | | Pattern: `^[a-z0-9][a-z0-9-]{0,61}$` <br />Type: string <br /> |

View File

@@ -66,6 +66,21 @@ type ProxyClassSpec struct {
// parameters of proxies.
// +optional
TailscaleConfig *TailscaleConfig `json:"tailscale,omitempty"`
// Set UseLetsEncryptStagingEnvironment to true to issue TLS
// certificates for any HTTPS endpoints exposed to the tailnet from
// LetsEncrypt's staging environment.
// https://letsencrypt.org/docs/staging-environment/
// This setting only affects Tailscale Ingress resources.
// By default Ingress TLS certificates are issued from LetsEncrypt's
// production environment.
// Changing this setting from true to false will result in any
// existing certs being re-issued from the production environment.
// Changing this setting from false (default) to true when certs have
// already been provisioned from the production environment will NOT
// result in certs being re-issued from the staging environment before
// they need to be renewed.
// +optional
UseLetsEncryptStagingEnvironment bool `json:"useLetsEncryptStagingEnvironment,omitempty"`
}
type TailscaleConfig struct {

View File

@@ -48,7 +48,7 @@ type ProxyGroupList struct {
}
type ProxyGroupSpec struct {
// Type of the ProxyGroup proxies. Supported types are egress and ingress.
// Type of the ProxyGroup proxies. Currently the only supported type is egress.
// Type is immutable once a ProxyGroup is created.
// +kubebuilder:validation:XValidation:rule="self == oldSelf",message="ProxyGroup type is immutable"
Type ProxyGroupType `json:"type"`

View File

@@ -153,6 +153,14 @@ type Secret struct {
Data map[string][]byte `json:"data,omitempty"`
}
// SecretList is a list of Secret objects.
type SecretList struct {
TypeMeta `json:",inline"`
ObjectMeta `json:"metadata"`
Items []Secret `json:"items,omitempty"`
}
// Event contains a subset of fields from corev1.Event.
// https://github.com/kubernetes/api/blob/6cc44b8953ae704d6d9ec2adf32e7ae19199ea9f/core/v1/types.go#L7034
// It is copied here to avoid having to import kube libraries.

View File

@@ -60,6 +60,7 @@ func readFile(n string) ([]byte, error) {
// It expects to be run inside a cluster.
type Client interface {
GetSecret(context.Context, string) (*kubeapi.Secret, error)
ListSecrets(context.Context, map[string]string) (*kubeapi.SecretList, error)
UpdateSecret(context.Context, *kubeapi.Secret) error
CreateSecret(context.Context, *kubeapi.Secret) error
// Event attempts to ensure an event with the specified options associated with the Pod in which we are
@@ -248,21 +249,35 @@ func (c *client) newRequest(ctx context.Context, method, url string, in any) (*h
// GetSecret fetches the secret from the Kubernetes API.
func (c *client) GetSecret(ctx context.Context, name string) (*kubeapi.Secret, error) {
s := &kubeapi.Secret{Data: make(map[string][]byte)}
if err := c.kubeAPIRequest(ctx, "GET", c.resourceURL(name, TypeSecrets), nil, s); err != nil {
if err := c.kubeAPIRequest(ctx, "GET", c.resourceURL(name, TypeSecrets, ""), nil, s); err != nil {
return nil, err
}
return s, nil
}
// ListSecrets fetches all secrets matching the given label selector from the Kubernetes API.
func (c *client) ListSecrets(ctx context.Context, selector map[string]string) (*kubeapi.SecretList, error) {
sl := new(kubeapi.SecretList)
s := make([]string, 0, len(selector))
for key, val := range selector {
s = append(s, key+"="+url.QueryEscape(val))
}
ss := strings.Join(s, ",")
if err := c.kubeAPIRequest(ctx, "GET", c.resourceURL("", TypeSecrets, ss), nil, sl); err != nil {
return nil, err
}
return sl, nil
}
// CreateSecret creates a secret in the Kubernetes API.
func (c *client) CreateSecret(ctx context.Context, s *kubeapi.Secret) error {
s.Namespace = c.ns
return c.kubeAPIRequest(ctx, "POST", c.resourceURL("", TypeSecrets), s, nil)
return c.kubeAPIRequest(ctx, "POST", c.resourceURL("", TypeSecrets, ""), s, nil)
}
// UpdateSecret updates a secret in the Kubernetes API.
func (c *client) UpdateSecret(ctx context.Context, s *kubeapi.Secret) error {
return c.kubeAPIRequest(ctx, "PUT", c.resourceURL(s.Name, TypeSecrets), s, nil)
return c.kubeAPIRequest(ctx, "PUT", c.resourceURL(s.Name, TypeSecrets, ""), s, nil)
}
// JSONPatch is a JSON patch operation.
@@ -283,14 +298,14 @@ func (c *client) JSONPatchResource(ctx context.Context, name, typ string, patche
return fmt.Errorf("unsupported JSON patch operation: %q", p.Op)
}
}
return c.kubeAPIRequest(ctx, "PATCH", c.resourceURL(name, typ), patches, nil, setHeader("Content-Type", "application/json-patch+json"))
return c.kubeAPIRequest(ctx, "PATCH", c.resourceURL(name, typ, ""), patches, nil, setHeader("Content-Type", "application/json-patch+json"))
}
// StrategicMergePatchSecret updates a secret in the Kubernetes API using a
// strategic merge patch.
// If a fieldManager is provided, it will be used to track the patch.
func (c *client) StrategicMergePatchSecret(ctx context.Context, name string, s *kubeapi.Secret, fieldManager string) error {
surl := c.resourceURL(name, TypeSecrets)
surl := c.resourceURL(name, TypeSecrets, "")
if fieldManager != "" {
uv := url.Values{
"fieldManager": {fieldManager},
@@ -342,7 +357,7 @@ func (c *client) Event(ctx context.Context, typ, reason, msg string) error {
LastTimestamp: now,
Count: 1,
}
return c.kubeAPIRequest(ctx, "POST", c.resourceURL("", typeEvents), &ev, nil)
return c.kubeAPIRequest(ctx, "POST", c.resourceURL("", typeEvents, ""), &ev, nil)
}
// If the Event already exists, we patch its count and last timestamp. This ensures that when users run 'kubectl
// describe pod...', they see the event just once (but with a message of how many times it has appeared over
@@ -472,9 +487,13 @@ func (c *client) checkPermission(ctx context.Context, verb, typ, name string) (b
// resourceURL returns a URL that can be used to interact with the given resource type and, if name is not an empty
// string, the named resource of that type. If sel is non-empty, it is added as a labelSelector query parameter;
// it is only applied when name is empty.
// Note that this only works for core/v1 resource types.
func (c *client) resourceURL(name, typ string) string {
func (c *client) resourceURL(name, typ, sel string) string {
if name == "" {
return fmt.Sprintf("%s/api/v1/namespaces/%s/%s", c.url, c.ns, typ)
url := fmt.Sprintf("%s/api/v1/namespaces/%s/%s", c.url, c.ns, typ)
if sel != "" {
url += "?labelSelector=" + sel
}
return url
}
return fmt.Sprintf("%s/api/v1/namespaces/%s/%s/%s", c.url, c.ns, typ, name)
}
@@ -487,7 +506,7 @@ func (c *client) nameForEvent(reason string) string {
// getEvent fetches the event from the Kubernetes API.
func (c *client) getEvent(ctx context.Context, name string) (*kubeapi.Event, error) {
e := &kubeapi.Event{}
if err := c.kubeAPIRequest(ctx, "GET", c.resourceURL(name, typeEvents), nil, e); err != nil {
if err := c.kubeAPIRequest(ctx, "GET", c.resourceURL(name, typeEvents, ""), nil, e); err != nil {
return nil, err
}
return e, nil
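For reference, a minimal caller-side sketch of the new ListSecrets method. The wrapper function is illustrative only; the label keys follow the constants added in this diff, and the tailscale.com/kube/{kubeapi,kubeclient} import paths are assumed from the packages the tests above reference.

// Sketch: list managed cert Secrets for a ProxyGroup via the new
// Client.ListSecrets. Illustrative only; not part of the change itself.
package kubecertsketch

import (
	"context"

	"tailscale.com/kube/kubeapi"
	"tailscale.com/kube/kubeclient"
)

func listProxyGroupCertSecrets(ctx context.Context, c kubeclient.Client, proxyGroup string) ([]kubeapi.Secret, error) {
	sl, err := c.ListSecrets(ctx, map[string]string{
		"tailscale.com/secret-type": "certs",
		"tailscale.com/managed":     "true",
		"tailscale.com/proxy-group": proxyGroup,
	})
	if err != nil {
		return nil, err
	}
	return sl.Items, nil
}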

View File

@@ -18,6 +18,7 @@ type FakeClient struct {
CreateSecretImpl func(context.Context, *kubeapi.Secret) error
UpdateSecretImpl func(context.Context, *kubeapi.Secret) error
JSONPatchResourceImpl func(context.Context, string, string, []JSONPatch) error
ListSecretsImpl func(context.Context, map[string]string) (*kubeapi.SecretList, error)
}
func (fc *FakeClient) CheckSecretPermissions(ctx context.Context, name string) (bool, bool, error) {
@@ -45,3 +46,9 @@ func (fc *FakeClient) UpdateSecret(ctx context.Context, secret *kubeapi.Secret)
func (fc *FakeClient) CreateSecret(ctx context.Context, secret *kubeapi.Secret) error {
return fc.CreateSecretImpl(ctx, secret)
}
func (fc *FakeClient) ListSecrets(ctx context.Context, selector map[string]string) (*kubeapi.SecretList, error) {
if fc.ListSecretsImpl != nil {
return fc.ListSecretsImpl(ctx, selector)
}
return nil, nil
}

View File

@@ -48,4 +48,7 @@ const (
PodIPv4Header string = "Pod-IPv4"
EgessServicesPreshutdownEP = "/internal-egress-services-preshutdown"
LabelManaged = "tailscale.com/managed"
LabelSecretType = "tailscale.com/secret-type" // "config", "state", "certs"
)

View File

@@ -29,7 +29,7 @@ Client][]. See also the dependencies in the [Tailscale CLI][].
- [github.com/djherbis/times](https://pkg.go.dev/github.com/djherbis/times) ([MIT](https://github.com/djherbis/times/blob/v1.6.0/LICENSE))
- [github.com/fxamacker/cbor/v2](https://pkg.go.dev/github.com/fxamacker/cbor/v2) ([MIT](https://github.com/fxamacker/cbor/blob/v2.7.0/LICENSE))
- [github.com/gaissmai/bart](https://pkg.go.dev/github.com/gaissmai/bart) ([MIT](https://github.com/gaissmai/bart/blob/v0.18.0/LICENSE))
- [github.com/go-json-experiment/json](https://pkg.go.dev/github.com/go-json-experiment/json) ([BSD-3-Clause](https://github.com/go-json-experiment/json/blob/6a9a0fde9288/LICENSE))
- [github.com/go-json-experiment/json](https://pkg.go.dev/github.com/go-json-experiment/json) ([BSD-3-Clause](https://github.com/go-json-experiment/json/blob/d3c622f1b874/LICENSE))
- [github.com/godbus/dbus/v5](https://pkg.go.dev/github.com/godbus/dbus/v5) ([BSD-2-Clause](https://github.com/godbus/dbus/blob/76236955d466/LICENSE))
- [github.com/golang/groupcache/lru](https://pkg.go.dev/github.com/golang/groupcache/lru) ([Apache-2.0](https://github.com/golang/groupcache/blob/41bb18bfe9da/LICENSE))
- [github.com/google/btree](https://pkg.go.dev/github.com/google/btree) ([Apache-2.0](https://github.com/google/btree/blob/v1.1.2/LICENSE))
@@ -64,11 +64,11 @@ Client][]. See also the dependencies in the [Tailscale CLI][].
- [go4.org/mem](https://pkg.go.dev/go4.org/mem) ([Apache-2.0](https://github.com/go4org/mem/blob/ae6ca9944745/LICENSE))
- [go4.org/netipx](https://pkg.go.dev/go4.org/netipx) ([BSD-3-Clause](https://github.com/go4org/netipx/blob/fdeea329fbba/LICENSE))
- [go4.org/unsafe/assume-no-moving-gc](https://pkg.go.dev/go4.org/unsafe/assume-no-moving-gc) ([BSD-3-Clause](https://github.com/go4org/unsafe-assume-no-moving-gc/blob/e7c30c78aeb2/LICENSE))
- [golang.org/x/crypto](https://pkg.go.dev/golang.org/x/crypto) ([BSD-3-Clause](https://cs.opensource.google/go/x/crypto/+/v0.33.0:LICENSE))
- [golang.org/x/crypto](https://pkg.go.dev/golang.org/x/crypto) ([BSD-3-Clause](https://cs.opensource.google/go/x/crypto/+/v0.35.0:LICENSE))
- [golang.org/x/exp](https://pkg.go.dev/golang.org/x/exp) ([BSD-3-Clause](https://cs.opensource.google/go/x/exp/+/939b2ce7:LICENSE))
- [golang.org/x/mobile](https://pkg.go.dev/golang.org/x/mobile) ([BSD-3-Clause](https://cs.opensource.google/go/x/mobile/+/81131f64:LICENSE))
- [golang.org/x/mod/semver](https://pkg.go.dev/golang.org/x/mod/semver) ([BSD-3-Clause](https://cs.opensource.google/go/x/mod/+/v0.23.0:LICENSE))
- [golang.org/x/net](https://pkg.go.dev/golang.org/x/net) ([BSD-3-Clause](https://cs.opensource.google/go/x/net/+/v0.35.0:LICENSE))
- [golang.org/x/net](https://pkg.go.dev/golang.org/x/net) ([BSD-3-Clause](https://cs.opensource.google/go/x/net/+/v0.36.0:LICENSE))
- [golang.org/x/sync](https://pkg.go.dev/golang.org/x/sync) ([BSD-3-Clause](https://cs.opensource.google/go/x/sync/+/v0.11.0:LICENSE))
- [golang.org/x/sys](https://pkg.go.dev/golang.org/x/sys) ([BSD-3-Clause](https://cs.opensource.google/go/x/sys/+/v0.30.0:LICENSE))
- [golang.org/x/term](https://pkg.go.dev/golang.org/x/term) ([BSD-3-Clause](https://cs.opensource.google/go/x/term/+/v0.29.0:LICENSE))

View File

@@ -70,7 +70,7 @@ See also the dependencies in the [Tailscale CLI][].
- [go4.org/netipx](https://pkg.go.dev/go4.org/netipx) ([BSD-3-Clause](https://github.com/go4org/netipx/blob/fdeea329fbba/LICENSE))
- [golang.org/x/crypto](https://pkg.go.dev/golang.org/x/crypto) ([BSD-3-Clause](https://cs.opensource.google/go/x/crypto/+/v0.35.0:LICENSE))
- [golang.org/x/exp](https://pkg.go.dev/golang.org/x/exp) ([BSD-3-Clause](https://cs.opensource.google/go/x/exp/+/939b2ce7:LICENSE))
- [golang.org/x/net](https://pkg.go.dev/golang.org/x/net) ([BSD-3-Clause](https://cs.opensource.google/go/x/net/+/v0.35.0:LICENSE))
- [golang.org/x/net](https://pkg.go.dev/golang.org/x/net) ([BSD-3-Clause](https://cs.opensource.google/go/x/net/+/v0.36.0:LICENSE))
- [golang.org/x/sync](https://pkg.go.dev/golang.org/x/sync) ([BSD-3-Clause](https://cs.opensource.google/go/x/sync/+/v0.11.0:LICENSE))
- [golang.org/x/sys](https://pkg.go.dev/golang.org/x/sys) ([BSD-3-Clause](https://cs.opensource.google/go/x/sys/+/v0.30.0:LICENSE))
- [golang.org/x/term](https://pkg.go.dev/golang.org/x/term) ([BSD-3-Clause](https://cs.opensource.google/go/x/term/+/v0.29.0:LICENSE))

View File

@@ -92,7 +92,7 @@ Some packages may only be included on certain architectures or operating systems
- [go4.org/netipx](https://pkg.go.dev/go4.org/netipx) ([BSD-3-Clause](https://github.com/go4org/netipx/blob/fdeea329fbba/LICENSE))
- [golang.org/x/crypto](https://pkg.go.dev/golang.org/x/crypto) ([BSD-3-Clause](https://cs.opensource.google/go/x/crypto/+/v0.35.0:LICENSE))
- [golang.org/x/exp](https://pkg.go.dev/golang.org/x/exp) ([BSD-3-Clause](https://cs.opensource.google/go/x/exp/+/939b2ce7:LICENSE))
- [golang.org/x/net](https://pkg.go.dev/golang.org/x/net) ([BSD-3-Clause](https://cs.opensource.google/go/x/net/+/v0.35.0:LICENSE))
- [golang.org/x/net](https://pkg.go.dev/golang.org/x/net) ([BSD-3-Clause](https://cs.opensource.google/go/x/net/+/v0.36.0:LICENSE))
- [golang.org/x/oauth2](https://pkg.go.dev/golang.org/x/oauth2) ([BSD-3-Clause](https://cs.opensource.google/go/x/oauth2/+/v0.26.0:LICENSE))
- [golang.org/x/sync](https://pkg.go.dev/golang.org/x/sync) ([BSD-3-Clause](https://cs.opensource.google/go/x/sync/+/v0.11.0:LICENSE))
- [golang.org/x/sys](https://pkg.go.dev/golang.org/x/sys) ([BSD-3-Clause](https://cs.opensource.google/go/x/sys/+/v0.30.0:LICENSE))

View File

@@ -62,7 +62,7 @@ Windows][]. See also the dependencies in the [Tailscale CLI][].
- [github.com/tailscale/go-winio](https://pkg.go.dev/github.com/tailscale/go-winio) ([MIT](https://github.com/tailscale/go-winio/blob/c4f33415bf55/LICENSE))
- [github.com/tailscale/hujson](https://pkg.go.dev/github.com/tailscale/hujson) ([BSD-3-Clause](https://github.com/tailscale/hujson/blob/20486734a56a/LICENSE))
- [github.com/tailscale/netlink](https://pkg.go.dev/github.com/tailscale/netlink) ([Apache-2.0](https://github.com/tailscale/netlink/blob/4d49adab4de7/LICENSE))
- [github.com/tailscale/walk](https://pkg.go.dev/github.com/tailscale/walk) ([BSD-3-Clause](https://github.com/tailscale/walk/blob/04068c1cab63/LICENSE))
- [github.com/tailscale/walk](https://pkg.go.dev/github.com/tailscale/walk) ([BSD-3-Clause](https://github.com/tailscale/walk/blob/b2c15a420186/LICENSE))
- [github.com/tailscale/win](https://pkg.go.dev/github.com/tailscale/win) ([BSD-3-Clause](https://github.com/tailscale/win/blob/5992cb43ca35/LICENSE))
- [github.com/tailscale/xnet/webdav](https://pkg.go.dev/github.com/tailscale/xnet/webdav) ([BSD-3-Clause](https://github.com/tailscale/xnet/blob/8497ac4dab2e/LICENSE))
- [github.com/tc-hib/winres](https://pkg.go.dev/github.com/tc-hib/winres) ([0BSD](https://github.com/tc-hib/winres/blob/v0.2.1/LICENSE))
@@ -74,7 +74,7 @@ Windows][]. See also the dependencies in the [Tailscale CLI][].
- [golang.org/x/exp](https://pkg.go.dev/golang.org/x/exp) ([BSD-3-Clause](https://cs.opensource.google/go/x/exp/+/939b2ce7:LICENSE))
- [golang.org/x/image/bmp](https://pkg.go.dev/golang.org/x/image/bmp) ([BSD-3-Clause](https://cs.opensource.google/go/x/image/+/v0.24.0:LICENSE))
- [golang.org/x/mod](https://pkg.go.dev/golang.org/x/mod) ([BSD-3-Clause](https://cs.opensource.google/go/x/mod/+/v0.23.0:LICENSE))
- [golang.org/x/net](https://pkg.go.dev/golang.org/x/net) ([BSD-3-Clause](https://cs.opensource.google/go/x/net/+/v0.35.0:LICENSE))
- [golang.org/x/net](https://pkg.go.dev/golang.org/x/net) ([BSD-3-Clause](https://cs.opensource.google/go/x/net/+/v0.36.0:LICENSE))
- [golang.org/x/sync](https://pkg.go.dev/golang.org/x/sync) ([BSD-3-Clause](https://cs.opensource.google/go/x/sync/+/v0.11.0:LICENSE))
- [golang.org/x/sys](https://pkg.go.dev/golang.org/x/sys) ([BSD-3-Clause](https://cs.opensource.google/go/x/sys/+/v0.30.0:LICENSE))
- [golang.org/x/term](https://pkg.go.dev/golang.org/x/term) ([BSD-3-Clause](https://cs.opensource.google/go/x/term/+/v0.29.0:LICENSE))

View File

@@ -35,6 +35,9 @@ import (
var (
errFullQueue = errors.New("request queue full")
// ErrNoDNSConfig is returned by RecompileDNSConfig when the Manager
// has no existing DNS configuration.
ErrNoDNSConfig = errors.New("no DNS configuration")
)
// maxActiveQueries returns the maximal number of DNS requests that can
@@ -91,21 +94,18 @@ func NewManager(logf logger.Logf, oscfg OSConfigurator, health *health.Tracker,
}
// Rate limit our attempts to correct our DNS configuration.
// This is done on incoming queries; we don't want to spam it.
limiter := rate.NewLimiter(1.0/5.0, 1)
// This will recompile the DNS config, which in turn will requery the system
// DNS settings. The recovery func should be triggered only when we are missing
// upstream nameservers and require them to forward a query.
m.resolver.SetMissingUpstreamRecovery(func() {
m.mu.Lock()
defer m.mu.Unlock()
if m.config == nil {
return
}
if limiter.Allow() {
m.logf("DNS resolution failed due to missing upstream nameservers. Recompiling DNS configuration.")
m.setLocked(*m.config)
m.logf("resolution failed due to missing upstream nameservers. Recompiling DNS configuration.")
if err := m.RecompileDNSConfig(); err != nil {
m.logf("config recompilation failed: %v", err)
}
}
})
@@ -117,6 +117,26 @@ func NewManager(logf logger.Logf, oscfg OSConfigurator, health *health.Tracker,
// Resolver returns the Manager's DNS Resolver.
func (m *Manager) Resolver() *resolver.Resolver { return m.resolver }
// RecompileDNSConfig sets the DNS config to the current value, which has
// the side effect of re-querying the OS's interface nameservers. This should be used
// on platforms where the interface nameservers can change: Darwin, for example,
// where the nameservers aren't always available when we process a major interface
// change event, or platforms where the nameservers may change while the tunnel is up.
//
// This should be called if it is determined that [OSConfigurator.GetBaseConfig] may
// give a better or different result than when [Manager.Set] was last called. The
// logic for making that determination is up to the caller.
//
// It returns [ErrNoDNSConfig] if the [Manager] has no existing DNS configuration.
func (m *Manager) RecompileDNSConfig() error {
m.mu.Lock()
defer m.mu.Unlock()
if m.config == nil {
return ErrNoDNSConfig
}
return m.setLocked(*m.config)
}
func (m *Manager) Set(cfg Config) error {
m.mu.Lock()
defer m.mu.Unlock()
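As a consumer-side illustration, here is a minimal sketch of calling the new RecompileDNSConfig once a caller decides GetBaseConfig may now return a better result. Only Manager.RecompileDNSConfig and ErrNoDNSConfig come from this diff; the callback function and its wiring are assumptions.

// Sketch: recompile the DNS config after a major interface change.
package dnsrecompilesketch

import (
	"errors"

	"tailscale.com/net/dns"
	"tailscale.com/types/logger"
)

func onMajorInterfaceChange(m *dns.Manager, logf logger.Logf) {
	if err := m.RecompileDNSConfig(); err != nil {
		if errors.Is(err, dns.ErrNoDNSConfig) {
			// Set has not been called yet; nothing to recompile.
			return
		}
		logf("dns: recompiling config after interface change: %v", err)
	}
}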

View File

@@ -29,6 +29,7 @@ import (
"tailscale.com/net/tsdial"
"tailscale.com/tstest"
"tailscale.com/types/dnstype"
"tailscale.com/util/eventbus"
)
func (rr resolverAndDelay) String() string {
@@ -454,7 +455,9 @@ func makeLargeResponse(tb testing.TB, domain string) (request, response []byte)
func runTestQuery(tb testing.TB, request []byte, modify func(*forwarder), ports ...uint16) ([]byte, error) {
logf := tstest.WhileTestRunningLogger(tb)
netMon, err := netmon.New(logf)
bus := eventbus.New()
defer bus.Close()
netMon, err := netmon.New(bus, logf)
if err != nil {
tb.Fatal(err)
}

View File

@@ -31,6 +31,7 @@ import (
"tailscale.com/types/dnstype"
"tailscale.com/types/logger"
"tailscale.com/util/dnsname"
"tailscale.com/util/eventbus"
)
var (
@@ -1059,7 +1060,10 @@ func TestForwardLinkSelection(t *testing.T) {
// routes differently.
specialIP := netaddr.IPv4(1, 2, 3, 4)
netMon, err := netmon.New(logger.WithPrefix(t.Logf, ".... netmon: "))
bus := eventbus.New()
defer bus.Close()
netMon, err := netmon.New(bus, logger.WithPrefix(t.Logf, ".... netmon: "))
if err != nil {
t.Fatal(err)
}

View File

@@ -15,6 +15,7 @@ import (
"tailscale.com/net/netmon"
"tailscale.com/tailcfg"
"tailscale.com/types/logger"
"tailscale.com/util/eventbus"
)
func TestGetDERPMap(t *testing.T) {
@@ -185,7 +186,10 @@ func TestLookup(t *testing.T) {
logf, closeLogf := logger.LogfCloser(t.Logf)
defer closeLogf()
netMon, err := netmon.New(logf)
bus := eventbus.New()
defer bus.Close()
netMon, err := netmon.New(bus, logf)
if err != nil {
t.Fatal(err)
}

View File

@@ -13,7 +13,7 @@ import (
)
func TestGetState(t *testing.T) {
st, err := GetState()
st, err := getState("")
if err != nil {
t.Fatal(err)
}

View File

@@ -7,10 +7,14 @@ import (
"bytes"
"fmt"
"testing"
"tailscale.com/util/eventbus"
)
func TestLinkChangeLogLimiter(t *testing.T) {
mon, err := New(t.Logf)
bus := eventbus.New()
defer bus.Close()
mon, err := New(bus, t.Logf)
if err != nil {
t.Fatal(err)
}

View File

@@ -16,6 +16,7 @@ import (
"tailscale.com/types/logger"
"tailscale.com/util/clientmetric"
"tailscale.com/util/eventbus"
"tailscale.com/util/set"
)
@@ -50,7 +51,10 @@ type osMon interface {
// Monitor represents a monitoring instance.
type Monitor struct {
logf logger.Logf
logf logger.Logf
b *eventbus.Client
changed *eventbus.Publisher[*ChangeDelta]
om osMon // nil means not supported on this platform
change chan bool // send false to wake poller, true to also force ChangeDeltas be sent
stop chan struct{} // closed on Stop
@@ -114,21 +118,23 @@ type ChangeDelta struct {
// New instantiates and starts a monitoring instance.
// The returned monitor is inactive until it's started by the Start method.
// Use RegisterChangeCallback to get notified of network changes.
func New(logf logger.Logf) (*Monitor, error) {
func New(bus *eventbus.Bus, logf logger.Logf) (*Monitor, error) {
logf = logger.WithPrefix(logf, "monitor: ")
m := &Monitor{
logf: logf,
b: bus.Client("netmon"),
change: make(chan bool, 1),
stop: make(chan struct{}),
lastWall: wallTime(),
}
m.changed = eventbus.Publish[*ChangeDelta](m.b)
st, err := m.interfaceStateUncached()
if err != nil {
return nil, err
}
m.ifState = st
m.om, err = newOSMon(logf, m)
m.om, err = newOSMon(bus, logf, m)
if err != nil {
return nil, err
}
@@ -161,7 +167,7 @@ func (m *Monitor) InterfaceState() *State {
}
func (m *Monitor) interfaceStateUncached() (*State, error) {
return GetState()
return getState(m.tsIfName)
}
// SetTailscaleInterfaceName sets the name of the Tailscale interface. For
@@ -465,6 +471,7 @@ func (m *Monitor) handlePotentialChange(newState *State, forceCallbacks bool) {
if delta.TimeJumped {
metricChangeTimeJump.Add(1)
}
m.changed.Publish(delta)
for _, cb := range m.cbs {
go cb(delta)
}
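With ChangeDelta now published on the bus, other subsystems can subscribe instead of registering callbacks. A minimal sketch follows; the Subscribe, Events, and Done names on the subscriber side are assumed to mirror the Publish side shown above, and the client name and wrapper function are illustrative.

// Sketch: consume netmon link-change events from the event bus instead of
// RegisterChangeCallback. Subscriber API names are assumptions.
package netmonsubsketch

import (
	"tailscale.com/net/netmon"
	"tailscale.com/types/logger"
	"tailscale.com/util/eventbus"
)

func watchLinkChanges(bus *eventbus.Bus, logf logger.Logf) (stop func()) {
	cli := bus.Client("link-watcher") // hypothetical client name
	sub := eventbus.Subscribe[*netmon.ChangeDelta](cli)
	go func() {
		for {
			select {
			case delta := <-sub.Events():
				logf("netmon: link changed: %+v", delta)
			case <-sub.Done():
				return
			}
		}
	}()
	return func() { cli.Close() }
}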

View File

@@ -13,6 +13,7 @@ import (
"golang.org/x/sys/unix"
"tailscale.com/net/netaddr"
"tailscale.com/types/logger"
"tailscale.com/util/eventbus"
)
const debugRouteMessages = false
@@ -24,7 +25,7 @@ type unspecifiedMessage struct{}
func (unspecifiedMessage) ignore() bool { return false }
func newOSMon(logf logger.Logf, _ *Monitor) (osMon, error) {
func newOSMon(_ *eventbus.Bus, logf logger.Logf, _ *Monitor) (osMon, error) {
fd, err := unix.Socket(unix.AF_ROUTE, unix.SOCK_RAW, 0)
if err != nil {
return nil, err

View File

@@ -10,6 +10,7 @@ import (
"strings"
"tailscale.com/types/logger"
"tailscale.com/util/eventbus"
)
// unspecifiedMessage is a minimal message implementation that should not
@@ -24,7 +25,7 @@ type devdConn struct {
conn net.Conn
}
func newOSMon(logf logger.Logf, m *Monitor) (osMon, error) {
func newOSMon(_ *eventbus.Bus, logf logger.Logf, m *Monitor) (osMon, error) {
conn, err := net.Dial("unixpacket", "/var/run/devd.seqpacket.pipe")
if err != nil {
logf("devd dial error: %v, falling back to polling method", err)

View File

@@ -16,6 +16,7 @@ import (
"tailscale.com/envknob"
"tailscale.com/net/tsaddr"
"tailscale.com/types/logger"
"tailscale.com/util/eventbus"
)
var debugNetlinkMessages = envknob.RegisterBool("TS_DEBUG_NETLINK")
@@ -27,15 +28,26 @@ type unspecifiedMessage struct{}
func (unspecifiedMessage) ignore() bool { return false }
// RuleDeleted reports that one of Tailscale's policy routing rules
// was deleted.
type RuleDeleted struct {
// Table is the table number that the deleted rule referenced.
Table uint8
// Priority is the lookup priority of the deleted rule.
Priority uint32
}
// nlConn wraps a *netlink.Conn and returns a monitor.Message
// instead of a netlink.Message. Currently, messages are discarded,
// but down the line, when messages trigger different logic depending
// on the type of event, this provides the capability of handling
// each architecture-specific message in a generic fashion.
type nlConn struct {
logf logger.Logf
conn *netlink.Conn
buffered []netlink.Message
busClient *eventbus.Client
rulesDeleted *eventbus.Publisher[RuleDeleted]
logf logger.Logf
conn *netlink.Conn
buffered []netlink.Message
// addrCache maps interface indices to a set of addresses, and is
// used to suppress duplicate RTM_NEWADDR messages. It is populated
@@ -44,7 +56,7 @@ type nlConn struct {
addrCache map[uint32]map[netip.Addr]bool
}
func newOSMon(logf logger.Logf, m *Monitor) (osMon, error) {
func newOSMon(bus *eventbus.Bus, logf logger.Logf, m *Monitor) (osMon, error) {
conn, err := netlink.Dial(unix.NETLINK_ROUTE, &netlink.Config{
// Routes get us most of the events of interest, but we need
// address as well to cover things like DHCP deciding to give
@@ -59,12 +71,22 @@ func newOSMon(logf logger.Logf, m *Monitor) (osMon, error) {
logf("monitor_linux: AF_NETLINK RTMGRP failed, falling back to polling")
return newPollingMon(logf, m)
}
return &nlConn{logf: logf, conn: conn, addrCache: make(map[uint32]map[netip.Addr]bool)}, nil
client := bus.Client("netmon-iprules")
return &nlConn{
busClient: client,
rulesDeleted: eventbus.Publish[RuleDeleted](client),
logf: logf,
conn: conn,
addrCache: make(map[uint32]map[netip.Addr]bool),
}, nil
}
func (c *nlConn) IsInterestingInterface(iface string) bool { return true }
func (c *nlConn) Close() error { return c.conn.Close() }
func (c *nlConn) Close() error {
c.busClient.Close()
return c.conn.Close()
}
func (c *nlConn) Receive() (message, error) {
if len(c.buffered) == 0 {
@@ -219,6 +241,10 @@ func (c *nlConn) Receive() (message, error) {
// On `ip -4 rule del pref 5210 table main`, logs:
// monitor: ip rule deleted: {Family:2 DstLength:0 SrcLength:0 Tos:0 Table:254 Protocol:0 Scope:0 Type:1 Flags:0 Attributes:{Dst:<nil> Src:<nil> Gateway:<nil> OutIface:0 Priority:5210 Table:254 Mark:4294967295 Expires:<nil> Metrics:<nil> Multipath:[]}}
}
c.rulesDeleted.Publish(RuleDeleted{
Table: rmsg.Table,
Priority: rmsg.Attributes.Priority,
})
rdm := ipRuleDeletedMessage{
table: rmsg.Table,
priority: rmsg.Attributes.Priority,
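The new RuleDeleted event gives interested subsystems (for example, code that wants to re-install Tailscale's policy routing rules) a way to react without a netmon callback. The sketch below follows the same assumed subscriber API as the ChangeDelta example above; the restore action is a placeholder, not part of this change.

// Sketch: watch for deletion of Tailscale's policy routing rules.
// Illustrative only.
package iprulewatchsketch

import (
	"tailscale.com/net/netmon"
	"tailscale.com/types/logger"
	"tailscale.com/util/eventbus"
)

func watchRuleDeletions(bus *eventbus.Bus, logf logger.Logf) {
	cli := bus.Client("iprule-watcher") // hypothetical client name
	sub := eventbus.Subscribe[netmon.RuleDeleted](cli)
	go func() {
		defer cli.Close()
		for {
			select {
			case rd := <-sub.Events():
				// A router implementation might re-add the deleted rule here.
				logf("ip rule deleted: table=%d priority=%d", rd.Table, rd.Priority)
			case <-sub.Done():
				return
			}
		}
	}()
}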

View File

@@ -7,9 +7,10 @@ package netmon
import (
"tailscale.com/types/logger"
"tailscale.com/util/eventbus"
)
func newOSMon(logf logger.Logf, m *Monitor) (osMon, error) {
func newOSMon(_ *eventbus.Bus, logf logger.Logf, m *Monitor) (osMon, error) {
return newPollingMon(logf, m)
}

View File

@@ -11,11 +11,15 @@ import (
"testing"
"time"
"tailscale.com/util/eventbus"
"tailscale.com/util/mak"
)
func TestMonitorStartClose(t *testing.T) {
mon, err := New(t.Logf)
bus := eventbus.New()
defer bus.Close()
mon, err := New(bus, t.Logf)
if err != nil {
t.Fatal(err)
}
@@ -26,7 +30,10 @@ func TestMonitorStartClose(t *testing.T) {
}
func TestMonitorJustClose(t *testing.T) {
mon, err := New(t.Logf)
bus := eventbus.New()
defer bus.Close()
mon, err := New(bus, t.Logf)
if err != nil {
t.Fatal(err)
}
@@ -36,7 +43,10 @@ func TestMonitorJustClose(t *testing.T) {
}
func TestMonitorInjectEvent(t *testing.T) {
mon, err := New(t.Logf)
bus := eventbus.New()
defer bus.Close()
mon, err := New(bus, t.Logf)
if err != nil {
t.Fatal(err)
}
@@ -71,7 +81,11 @@ func TestMonitorMode(t *testing.T) {
default:
t.Skipf(`invalid --monitor value: must be "raw" or "callback"`)
}
mon, err := New(t.Logf)
bus := eventbus.New()
defer bus.Close()
mon, err := New(bus, t.Logf)
if err != nil {
t.Fatal(err)
}

View File

@@ -13,6 +13,7 @@ import (
"golang.zx2c4.com/wireguard/windows/tunnel/winipcfg"
"tailscale.com/net/tsaddr"
"tailscale.com/types/logger"
"tailscale.com/util/eventbus"
)
var (
@@ -45,7 +46,7 @@ type winMon struct {
noDeadlockTicker *time.Ticker
}
func newOSMon(logf logger.Logf, pm *Monitor) (osMon, error) {
func newOSMon(_ *eventbus.Bus, logf logger.Logf, pm *Monitor) (osMon, error) {
m := &winMon{
logf: logf,
isActive: pm.isActive,

View File

@@ -461,21 +461,22 @@ func isTailscaleInterface(name string, ips []netip.Prefix) bool {
// getPAC, if non-nil, returns the current PAC file URL.
var getPAC func() string
// GetState returns the state of all the current machine's network interfaces.
// getState returns the state of all the current machine's network interfaces.
//
// It does not set the returned State.IsExpensive. The caller can populate that.
//
// Deprecated: use netmon.Monitor.InterfaceState instead.
func GetState() (*State, error) {
// optTSInterfaceName is the name of the Tailscale interface, if known.
func getState(optTSInterfaceName string) (*State, error) {
s := &State{
InterfaceIPs: make(map[string][]netip.Prefix),
Interface: make(map[string]Interface),
}
if err := ForeachInterface(func(ni Interface, pfxs []netip.Prefix) {
isTSInterfaceName := optTSInterfaceName != "" && ni.Name == optTSInterfaceName
ifUp := ni.IsUp()
s.Interface[ni.Name] = ni
s.InterfaceIPs[ni.Name] = append(s.InterfaceIPs[ni.Name], pfxs...)
if !ifUp || isTailscaleInterface(ni.Name, pfxs) {
if !ifUp || isTSInterfaceName || isTailscaleInterface(ni.Name, pfxs) {
return
}
for _, pfx := range pfxs {
@@ -755,11 +756,12 @@ func DefaultRoute() (DefaultRouteDetails, error) {
// HasCGNATInterface reports whether there are any non-Tailscale interfaces that
// use a CGNAT IP range.
func HasCGNATInterface() (bool, error) {
func (m *Monitor) HasCGNATInterface() (bool, error) {
hasCGNATInterface := false
cgnatRange := tsaddr.CGNATRange()
err := ForeachInterface(func(i Interface, pfxs []netip.Prefix) {
if hasCGNATInterface || !i.IsUp() || isTailscaleInterface(i.Name, pfxs) {
isTSInterfaceName := m.tsIfName != "" && i.Name == m.tsIfName
if hasCGNATInterface || !i.IsUp() || isTSInterfaceName || isTailscaleInterface(i.Name, pfxs) {
return
}
for _, pfx := range pfxs {

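Because HasCGNATInterface is now a method on *Monitor (so it can also skip the named Tailscale interface), callers need a Monitor, and therefore an event bus. A minimal sketch, with the wrapper function as an assumption:

// Sketch: HasCGNATInterface now hangs off *netmon.Monitor, which requires
// an event bus to construct. Illustrative only.
package cgnatchecksketch

import (
	"log"

	"tailscale.com/net/netmon"
	"tailscale.com/util/eventbus"
)

func hasCGNAT() (bool, error) {
	bus := eventbus.New()
	defer bus.Close()
	mon, err := netmon.New(bus, log.Printf)
	if err != nil {
		return false, err
	}
	defer mon.Close()
	return mon.HasCGNATInterface()
}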
Some files were not shown because too many files have changed in this diff.