But I also run a personal golink server in my homelab with some links that don’t really make sense to add to our corporate golink instance. I’d really like to be able to access my personal go links, even when I’m logged in to my work Tailscale profile. And it turns out, it’s incredibly simple to do.
Tailscale allows you to share devices with individuals on other tailnets.
You can control exactly what level of access those users have in your ACLs just like any other user.
Share recipients see the device in their list of machines with a “shared in” label.
Because the device is in a different tailnet, they can’t use the MagicDNS short name to access it, but they can still use the fully qualified host.tailnetXXXX.ts.net address.
As I noted previously, I’ve also set up DNS so that I can access my devices on a custom domain. So my personal golink server is at go.willnorris.net, but is still only accessible on my tailnet. Once I’ve shared my golink server to my work account, I can access all of my personal go links using URLs like go.willnorris.net/deploy.
But that’s still a lot of typing, and I’d like to have something a little closer to the convenience of the short go hostname.
What I ended up doing is creating a chain of go links on our corporate golink server which allows any employee to access their personal go links.
All go links have a short name and a destination URL.
The destination URL can actually use Go templates to do dynamic resolution. One of the variables the template has access to is .User, which provides the username (typically an email address) of the user resolving the link.
So for example, we have a link named go/me, which resolves as:
go/me => http://who/{{TrimSuffix .User "tailscale.com"}}
This will take the username of the person visiting go/me, trim off the “tailscale.com” from the end of their email address, and send them to our who service.
So when I visit go/me, it sends me to http://who/will@, which shows my personal profile in our company directory.
(This was one of the go links I brought over from my time at Twitter.)
So back to accessing my personal go link server.
We have a very similarly named go link, go/my, which resolves as:
go/my => /{{TrimSuffix .User "@tailscale.com"}}-go{{with .Path}}/{{.}}{{end}}
Let’s break this down:
{{TrimSuffix .User "@tailscale.com"}} is almost identical to our go/me link, but it strips off the @ as well. So when I visit this link, this portion will simply resolve to will.
-go means that we just add the literal string -go, so now we have will-go.
{{with .Path}}/{{.}}{{end}} means that if I added an additional path, we’ll add a slash and then whatever path was specified.
So if I visited go/my/deploy, then deploy would be the extra path that gets added to the end.
There’s one more thing to call out: this destination is a relative URL. It doesn’t have a scheme or a host; it just starts with a /. That means it gets resolved relative to the current host, which is http://go/.
This is how you chain multiple go links together, and it’s actually important that you do it this way.
So if I visit http://go/my, using the expansion explained above, I would be sent to /will-go, which then expands to the absolute URL http://go/will-go.
So where does /will-go resolve to? Well, to my personal go link server, of course!
Any Tailscale employee can create a link named {user}-go with their username, and point that at their personal golink server.
So for example, I have:
go/will-go => http://go.willnorris.net/
I don’t need to use any .Path template variables, since golink will append any extra path by default. And if I hadn’t set up a custom domain, this could just as easily be http://go.tailXXXX.ts.net.
So now this means when I visit go/my/deploy, it ends up resolving to http://go.willnorris.net/deploy, as you can see in this truncated curl output:
% curl -isL http://go/my/deploy
HTTP/1.1 302 Found
Location: /will-go/deploy
HTTP/1.1 302 Found
Location: http://go.willnorris.net/deploy
HTTP/1.1 302 Found
Location: https://github.com/willnorris/willnorris.com/actions/workflows/deploy.yml
This approach for accessing personal go links works by chaining multiple go links together to get to the final destination.
This is also commonly done to create alias go links.
For example, you might have go/bugs that links to your bug tracker, but you may also want go/b, go/bug, and go/issues to link there. You could copy the same destination URL to all of the links, or you could just have the aliases link to the first:
go/bugs => http://bugtracker/
go/b => /bugs
go/bug => /bugs
go/issues => /bugs
Then, if you ever move your bug tracker, you only need to update the main go/bugs link.
This is also helpful for go links that have common misspellings.
So imagine I had created an alias on my personal golink server for go/b, but instead of using a relative link /bugs, I used the absolute URL http://go/bugs. Now what happens when I resolve that from my work account using go/my/b?
% curl -isL http://go/my/b
HTTP/1.1 302 Found
Location: /will-go/b
HTTP/1.1 302 Found
Location: http://go.willnorris.net/b
HTTP/1.1 302 Found
Location: http://go/bugs
HTTP/1.1 302 Found
Location: http://bugs.corp.example.com
When I resolved http://go.willnorris.net/b, it redirected to http://go/bugs. But because I’m logged into my company account, http://go/ points to my company golink server, which then redirects http://go/bugs to the company bug tracker, not my own.
Using relative links ensures that chained links are always resolved by the same server.
This is also helpful if you name your server something other than go, or you decide to rename it at some point.
Finally, because I’ve gotten accustomed to using go/my links for my personal links, I’ve also set up a go/my link on my personal golink server. Since those links should just resolve locally, the destination URL is literally just a slash:
go/my => /
So now, if I use go/my/deploy when I’m on my personal Tailscale account, it still gets me there, even though I could have just used go/deploy.
What’s particularly neat about this approach is that it didn’t require building anything extra. Device sharing, MagicDNS, user identity, and access controls are all just core features of Tailscale. They’re just building blocks you can use to build and access all kinds of services. And once I had those, it was just a matter of setting up a few go links.
tailnetNNNN.ts.net, and so every device can be addressed as <device>.tailnetNNNN.ts.net.
If you want, you can instead choose a fun tailnet name, which is randomly picked from a list of things with tails and a list of things with scales. So you might end up with something like <device>.orca-lizard.ts.net.
While the fun tailnet names are cute and all, I really wanted to use my own domain. For quite a while, I just manually maintained DNS records for the handful of hosts I cared about. Tailscale IP addresses don’t change, so this wasn’t actually too much work. But I recently got around to switching to a new tailnet using my own domain with custom OIDC, which meant I needed to reregister all of my devices.
I decided to take this opportunity to try and sort out my DNS properly. What I found was coredns-tailscale, a plugin for coredns that effectively maps Tailscale device names onto a custom domain. The coredns-tailscale project has been around for about a year, and I later discovered that it had been mentioned in the Tailscale newsletter from October 2022. I guess I either missed seeing it or just wasn’t looking for a tool like that at the time.
When I started manually maintaining DNS records for my Tailscale devices, I chose the zone ipn.willnorris.net. (IPN was the abbreviation for a Tailscale network before it was called a “tailnet”, and is still present in parts of the code base.) So I basically wanted to delegate the entire ipn.willnorris.net zone to my coredns server.
I use Porkbun for domain registration and DNS hosting, so it was a simple matter of adding NS records.
I already knew I wanted to host coredns on Fly, so I created the Fly app and got a public IP address.
I didn’t have to, but I decided to go ahead and add names for my nameservers rather than bare IPs.
I cleverly chose ns1.ipn.willnorris.net and ns2.ipn.willnorris.net.
I added A records pointing each hostname to my Fly IP address, and added NS records for ipn.willnorris.net pointing to those two hosts.
ns1.ipn.willnorris.net. 600 IN A 37.16.12.98
ns2.ipn.willnorris.net. 600 IN A 37.16.12.98
ipn.willnorris.net. 600 IN NS ns1.ipn.willnorris.net.
ipn.willnorris.net. 600 IN NS ns2.ipn.willnorris.net.
I needed the coredns server to join my tailnet (explained below), so I created an auth key for that purpose. I made one that is reusable, ephemeral, pre-approved, and tagged with tag:dns.
I also added an ACL entry to my policy file to make sure that all of the devices on my network can do DNS queries.
This same entry also causes the DNS server to be aware of all of the other devices on the network, which is needed to populate its internal mappings.
{
"acls": [
{
"action": "accept",
"src": ["*"],
"dst": ["tag:dns:53"]
}
],
"tagOwners": {
"tag:dns": []
}
}
The source for my personal coredns server can be found at https://github.com/willnorris/ipn-dns.
There’s really not a whole lot to it.
My main.go simply registers the tailscale plugin and starts coredns.
My Dockerfile builds everything in a wolfi build image and copies the final binary and config to a static image.
(Don’t miss calling setcap cap_net_bind_service=+ep so that you can listen on port 53.)
My fly config is also pretty boring, adding a single volume mount for Tailscale state files and listening on port 53.
I also set my Tailscale auth key to the TS_AUTHKEY secrets variable using fly secrets.
The only interesting bit is the coredns config itself:
ipn.willnorris.net {
hosts {
# some resolvers will recheck the entries for DNS glue records at the delegate nameserver.
# Manually specify these hosts, since they won't appear in the Tailscale node list.
37.16.12.98 ns1.ipn.willnorris.net ns2.ipn.willnorris.net
fallthrough
}
tailscale ipn.willnorris.net {
authkey {$TS_AUTHKEY}
}
log
errors
}
I manually respecify records for my nameservers since some resolvers will check for that. I then configure the coredns-tailscale plugin to use my ipn.willnorris.net zone and to register itself with my Tailscale auth key.
Now this auth key is the one really non-standard bit, and relies on a local change I made to coredns-tailscale.
Normally, it requires that a Tailscale client be running on the host system (the docker image in my case).
I added support for having coredns join the tailnet directly using tsnet,
so that everything can be self-contained in the single coredns binary, including the Tailscale client itself.
I also made another change to respond to tailnet changes more quickly.
If you want to try those changes out yourself, see the replace directive in my go.mod.
Once deployed, you can see that DNS queries for my MagicDNS hostname and my custom hostname match. Though in practice, I typically create a CNAME without the ipn component and use that for actually accessing services when I need to:
% dig +short go.tail27e07.ts.net
100.69.62.103
% dig +short go.ipn.willnorris.net
100.69.62.103
% dig +short go.willnorris.net
go.ipn.willnorris.net.
100.69.62.103
There are a few additional things that MagicDNS gets you that are missing here. First, MagicDNS also automatically sets up a DNS search path so that you can typically just use bare hostnames. This is what makes go links like go/meet work without needing the fully qualified domain name. You can also have Tailscale automatically get certificates for your ts.net hostname, even for private services that can’t typically get Let’s Encrypt certs using the HTTP challenge. This is possible because Tailscale uses the DNS challenge on the ts.net domain. And Tailscale serve and funnel build on top of this HTTPS support to make services available to your tailnet or even publicly on the internet. None of these things work with the custom DNS approach I’ve described here.
However, there are still reasons why you might want custom names as a supplement to your ts.net hostnames. I often share some devices between my personal and work tailnet. While bare hostnames work for devices in your own tailnet, they don’t work for shared devices. For that, you have to use the fully qualified hostname, and I can never remember (or want to type) my full ts.net name. If I want to access a personal go link while logged into my work tailnet, it’s much simpler to remember go.willnorris.net. (Actually, I have an even simpler method with go links I’ll talk about later.)
Or you may have existing hostnames that you’ve been using for a while and want to migrate them to a private Tailscale network. Or you’re possibly migrating from a different VPN product that was using a custom domain. Setting up a DNS server like this could help keep those old hostnames active with their new Tailscale IP addresses.
It’s also worth noting that I’m serving my custom DNS server publicly. That means anyone can poke around to discover my Tailscale device names as well as their Tailscale IPs. But those hostnames already end up getting written to public transparency logs whenever HTTPS certs are issued, so I’m not too worried about that. And Tailscale IP addresses themselves are generally pretty useless, though they do theoretically make certain types of attacks a little easier. So depending on the network setup and what you’re trying to do, you could just host this DNS server privately instead.
In July 2014, I wrote Supporting WebFinger with Static Files and Nginx.
I still use Webfinger, now primarily for my custom Mastodon server and most recently with OpenID Connect for Tailscale.
My old nginx config required lua support to be compiled in, which wasn’t awful, but kind of annoying.
My Caddy configuration is mostly equivalent, though I didn’t bother to return the proper 400 and 405 status codes on an incorrect resource parameter or HTTP method. Instead, they just return a 404, which suits me just fine.
I define a named matcher that matches on the webfinger well-known URL, the HTTP methods I want to support, and one of several valid resource values. Then I rewrite the request to a static file like before and set some response headers.
@webfinger {
path /.well-known/webfinger
method GET HEAD
query resource=acct:will@willnorris.com
query resource=mailto:will@willnorris.com
query resource=https://willnorris.com
query resource=https://willnorris.com/
}
rewrite @webfinger /webfinger.json
header @webfinger {
Content-Type "application/jrd+json"
Access-Control-Allow-Origin "*"
X-Robots-Tag "noindex"
}
In August 2014, I wrote Proxying webmentions with nginx. I still proxy my webmentions to an external service, though I now use webmention.io. The config requires a tiny bit more work because my URL path didn’t match where I needed to send it, but it is still pretty straightforward.
Like before, I use a named matcher to match the relevant requests, then use Caddy’s reverse_proxy directive to send them to webmention.io.
@webmention {
method POST
path /api/webmention/
}
handle @webmention {
uri replace /api/webmention/ /willnorris.com/webmention
reverse_proxy https://webmention.io {
header_up Host {upstream_hostport}
}
}
In February 2015, I wrote Fetching Go Sub-Packages on Static Sites. Unsurprisingly, I still use my own domain in the import path of all of my go packages. I currently use Hugo to generate my site, so I have a custom layout for my go package files which reads relevant metadata from the page front matter and populates the necessary meta tags.
To serve the right page on go get requests for sub-packages, the Caddy config is quite minimal. A named matcher matches requests for go sub-packages that include the go-get parameter, and then the contents of the top-level go package file are served without the sub-package.
@gopkg {
path_regexp gopkg (/go/\w+/).+
query go-get=*
}
rewrite @gopkg {re.gopkg.1}
I’ve also done a lot more interesting things with custom Caddy modules like embedding my imageproxy service as well as a Tailscale node directly into the Caddy binary. But that will be a topic for another day.
I was really into baseball as a kid, and so I really enjoyed the way some teams would overlay their city initials as their team logo. The most famous, of course, being the stacked “NY” for the New York Yankees (which apparently came from a Tiffany-designed NYPD medal), but also “SD” for the San Diego Padres, and “LA” for the Los Angeles Dodgers.
In more recent years, I’ve seen a few of these types of logos that really stood out. Probably the most memorable for me is Terry Mun’s “TM” logo using the negative space for the M. Terry also does some really tasteful animations with his logo and the rest of his site, but even the static logo is quite something.
I had also recently rediscovered some of Andy Bell’s CSS work (notably his method for managing flow), and was struck by the simplicity of his triangular “A” logo on his website. It certainly fits with the minimalist aesthetic of the rest of his site, and it inspired me to start doodling again.
I started with the same equilateral triangle, notched on one side using a second triangle one-third the base size. I added the notch on the top to form a “V”, then created a second one and combined them to form a “W”. Finally, I separated the left arm of the “W” to allow it to also be read as a slanted “N”.
I ended up with something that is most certainly inspired by Andy’s logo, but with some additional character I really like for combining the W and N. The final result also reminds me a bit of the Wonder Woman logo, which was not intentional but I’m kind of okay with. I certainly don’t need a personal logo, and it’s somewhat of a vanity project, but it was certainly fun to design and build.
Earlier this year, I organized and ran the Pinewood Derby for my son’s Cub Scout Pack. I had always participated in the Pinewood Derby when I was a Scout, and attended a few as an adult, but I’d never actually organized or run one myself. This is an overview of the software I used, how I set it up, and how Tailscale brought it all together. (Disclaimer up front: I also work at Tailscale.)
The Pinewood Derby has been a favorite scouting event for over 70 years, with scouts designing, building, and racing a model car made from a block of pine wood. Awards are given for the fastest cars, but typically also for most creative designs. It’s no exaggeration to say that some kids stay in scouts just for the pinewood derby.
The complexity of a scout pack’s pinewood derby setup can vary pretty wildly. Our pack races on a 6 lane, 40 foot aluminum track from BestTrack with a Champ Timer. The timer interfaces with race management software specifically designed for these types of races that manages the racing brackets, records the times for each heat, and calculates the rankings.
In the past, our pack has used GrandPrix Race Manager which, as best as I can tell, is one of the more popular software options. However, this year I chose to instead use DerbyNet, which is an open source alternative. Besides being open source (which was great because I did make some small customizations), I also really liked how DerbyNet is architected. The application itself is a simple web application written in PHP with a SQLite database. System requirements are minimal and everything is managed through the browser, even interfacing with the race timer using the Web Serial API.
You do need somewhere to actually run the application, which can be a local laptop, a raspberry pi, or a remote cloud server. And you do need at least one client that can access that server to serve as the primary coordinator for the race. Then, any number of additional devices can be used in different roles, such as checking in scouts, displaying race results on kiosks, or providing instant replays with a camera.
We held our pinewood derby at the local fire station, which is a lot of fun because the kids get to hang out in the apparatus bay and look at the trucks. The battalion chief was very gracious and accommodating, but we weren’t completely sure whether we’d be able to use the station’s wireless network. I opted to run DerbyNet directly on my laptop so that in a worst case scenario, I could do everything locally from a single machine without any network connection. That ended up just being a local Caddy web server (after I gave up trying to run it in Docker), PHP to run DerbyNet, and connecting to our race timer over a USB-to-serial adapter.
The Caddy config was very simple:
:8080 {
root website
file_server {
index index.html index.php
}
php_fastcgi unix//opt/homebrew/var/run/php-fpm.sock {
env DERBYNET_CONFIG_DIR /var/lib/derbynet
env DERBYNET_DATA_DIR /var/lib/derbynet
}
}
Once we had that working, we added:
DerbyNet also allows parents to vote on the design awards, and we knew that was something we wanted to support if possible. If the network wasn’t cooperative, the leaders could always just select winners.
Unfortunately, there were indeed issues with the fire station’s wifi, so we ended up having to tether all of the devices off of cell phone hotspots. My MacBook and Pixelbook were connected to my phone, the track manager had his iPad connected to his phone, and the volunteers and parents were on their individual phones. But we needed all of them to be able to reach the DerbyNet server running on my laptop, itself tethered to a phone.
Our device setup looked a little something like this:
So we have a dozen or so different devices on disparate networks that we need to all connect to each other. Fortunately, this is exactly what Tailscale is designed for: providing secure access between remote devices and resources. Of course, if I had all of these devices on the same tailnet, there wouldn’t really be much more to do. Every device would enable Tailscale and go to the MagicDNS hostname for the server. That’s actually what I did for the two devices of my own (the MacBook and Pixelbook), both of which had Tailscale installed and set up ahead of time. Because they were tethered on the same phone, Tailscale connected them directly over local IP addresses.
To provide access for the track manager, I used Tailscale Funnel to expose the DerbyNet server to the public internet. On my laptop, that was as simple as running:
$ tailscale serve https / http://127.0.0.1:8080
$ tailscale funnel 443 on
The track manager (who was tethered on a separate phone) was then able to navigate to my same MagicDNS hostname (something like https://derby.tailnet.ts.net) which routed through Tailscale’s public funnel servers and down to my laptop. It worked amazingly well, especially considering that Funnel was a very new feature at the time.
We ran the whole pinewood derby like this without even the slightest hiccup. For the parents, I could have simply had them go to the same MagicDNS hostname, but I wanted to try something a little different and easier to remember. I set up a reverse proxy on derby.pack263.org to direct traffic to Tailscale Funnel, which in turn routed it to my laptop. It wasn’t really necessary, but the reverse proxy was so simple to do in Caddy (with automatic SSL cert provisioning and all):
derby.pack263.org {
reverse_proxy https://derby.tailnet.ts.net {
header_up Host {upstream_hostport}
}
}
So the parents were able to access the DerbyNet server running on my laptop from their phones to vote on cars. I found out later that we even had one scout that was sick at home and was refreshing the DerbyNet site to see the results as the races were happening.
The final traffic flow was something like this:
I technically did run the reverse proxy on a cloud VM that I had available, but otherwise everything was just vanilla Tailscale with nothing too exotic. And even the reverse proxy was just a nice-to-have; I could have just as easily set up a simple redirect.
To be perfectly honest, it did feel a little risky to be trying something with so many moving parts for my very first pinewood derby. But I really couldn’t have been happier with how stable it was and how it turned out, and now I can’t imagine doing an event like this any other way.
password, not admin, and not hunter2. Just an empty string.
When you get a screen prompting you for a password, don’t enter anything, just click ‘OK’.
Hopefully that will save future me (and maybe current you) an hour of headache.
(Update: Or at least this was my experience after resetting my RainMachine. This article seems to suggest that the default password is in fact admin? So try both.)
This evening, I accidentally reset the password on my RainMachine Pro 16. I was trying to reboot it through the on-device screen, but the touch sensor misinterpreted my selection. And then I didn’t read the confirmation screen closely, and ended up resetting the password.
The next hour or so was spent trying to reset the password following the instructions provided by RainMachine. My RainMachine sits about 5 feet away from my network switch, so it is wired via ethernet. But the instructions seem to assume wifi, and certainly imply that this is the only way to reset the password. This is a lie. A bald-faced lie.
I don’t know whether to blame Android, iOS, or RainMachine for the abysmal experience that resulted in trying to re-configure the device over wifi. Android simply refused to configure the device and iOS insisted on trying to set up HomeKit, and then still refused to set things up properly. But it really doesn’t matter, because in the end it wasn’t necessary. Once the device is reset, you can simply login with an empty string, whether that’s over ethernet or wifi.
It turns out that this is mentioned on the documentation for the RainMachine Mini-8:
Leave the password field empty since the password for the RainMachine device has been erased.
But that’s not the version I have, and so I didn’t initially read that. Why they didn’t include this on the documentation for the Pro-16 is beyond me. Maybe they’ll update that. But in the meantime, hopefully this page will guide folks in the right direction.
Over the last decade or so, many companies (especially in tech) have adopted a “No Jerks” policy. The idea is to have policies in place for dealing with workplace bullies and jerks, and to have safeguards in the recruiting and hiring pipeline to prevent them from being hired in the first place. Much of this wave of emphasizing workplace civility seems to have begun with Robert Sutton’s 2007 book, The No Asshole Rule and the numerous articles that followed. When I was at Google, this topic was emphasized in a 2013 internal memo from a senior executive that was (and still is) widely circulated around the company.
These kinds of articles and memos can serve as great introductions to understanding the impact bullies and jerks have on teams and the entire organization, and the more extreme examples of this behavior are often easy to spot. However, I found that some of the examples in Sutton’s book, while true reports of actual situations, were so extreme that they bordered on caricature. It became all too easy to dismiss them, feeling that my workplace doesn’t have any of those people.
But what I have found in my experience is that most people are not overtly jerks most of the time. People tend to be far more nuanced, and the behaviors that can lead to discord on a team are far more subtle, and often unintentional.
The risk of teams and organizations only focusing on avoiding negative behavior is that they will, at best, trend toward neutral behavior. It’s not enough to simply not be a jerk. We must strive to be intentionally positive.
Being intentionally positive is not something that happens by accident. It’s not something you stumble into, and I suspect that it does not come naturally to many people. By definition, being intentionally positive is a conscious and deliberate choice to behave in a particular way.
So what does it mean to be intentionally positive in the workplace? It’s really nothing profound or surprising. It’s about being respectful to coworkers, empathetic and thoughtful in communication, quick to correct and apologize when you make a mistake, and quick to forgive when others do. I’ve always worked in engineering organizations, so I’ll give two concrete examples from my experience there, but the lessons apply to any kind of work.
Any sufficiently complex software project is going to have bugs. The documentation is never as clear as you would like, and the tests are never as thorough. Decisions or policies that were made years ago might no longer make sense, and there may be no one left that even remembers how or why they were made in the first place. We nearly always work in imperfect situations, but the attitude we have toward that can have a big impact on a team.
Off-handed remarks about how poorly a piece of work has been done, or how short-sighted a decision was, set the tone for how others’ work is evaluated. When done in a critical rather than constructive manner, such remarks only serve to tear people down, even if they’re no longer around to hear it. In fact, it’s especially important if they’re no longer around, since this signals to the current team how they should expect their own work to be discussed after they’re gone.
There’s a concept in improvisational acting and comedy called “Yes, and…”. It’s a technique of accepting whatever idea or direction your partner gives you (the “yes”), and then building on that (the “and”). Even if the idea seems preposterous or isn’t where you were wanting to go. In business settings, this is often discussed in connection with brainstorming sessions, but I think it applies here as well because it’s really about attitude. If we’re dealing with legacy code or unclear policies, we accept whatever we have today, and then build on it. We may still end up changing it, even drastically, but it means we will do so with an intentionally positive attitude.
Bob Ross was famous for saying that “we don’t make mistakes, we have happy accidents”. What is that, if not applying “yes, and…” to his painting? Yes this thing happened, and we’re going to work with it and turn it into something positive.
I think one of the hardest skills in life, and one I’m not sure I’ll ever truly master, is effective communication. Especially as more people are working remotely, and conversations are split across a variety of communication channels, the opportunity for misunderstandings is ever increasing. Whether it’s a simple code or design review for a teammate, or answering the same customer question for the 100th time by a new-hire that doesn’t know any better, there is a huge opportunity to be intentionally positive in our communication.
In technical reviews in particular, I have found that short, to-the-point comments meant to be expedient can also be received as brusque, impatient, or dismissive. Projects like Conventional Comments suggest a structured way to prefix comments with additional context, but I’ve also found that simply responding in complete sentences often leads to clearer communication. Or asking questions rather than giving commands: “Have you considered X here?” rather than “do X here”. (I’ve also seen this backfire where such a Socratic method led to frustration with a reviewer that clearly seemed to have an opinion but just wouldn’t outright say it.)
When dealing with customer questions, we can start by being careful to fully understand their specific situation, since it may actually not be the same as those before them. And then we can be friendly, helpful, and understanding of their problem. This isn’t about platitudes or fake hospitality (that really drives me crazy); this is about genuine empathy and kindness.
While being intentionally positive in how we communicate with others, it’s equally (if not more) important to take the same attitude when we’re on the receiving end. This is often described as “assuming good intent” or the principle of charity. If something that someone says could be interpreted multiple ways, assume that they meant it in the best possible way. Even if you know that they didn’t, it can sometimes be effective to simply ignore that fact and respond as if they did.
Like any other skill, this will not come naturally for everyone (I know it often doesn’t for me). It may require conscious effort, and it may feel awkward at times. And that may mean that it takes longer to reply to an email or complete a code review because you have to spend extra time thinking about how you respond. That’s okay. As a manager, I’m willing to sacrifice velocity in order to improve team health because I know that we will only get better at it and it will come more naturally with practice.
The only way to build the kind of team that I would like to work on is to make deliberate decisions to be that team each day.
(I picked up the term “intentionally positive” from the IndieWeb Code of Conduct, which begins, “IndieWeb is an intentionally positive community.” I absolutely love that, and it’s always stuck with me. I did some digging, and it was added by Tantek Çelik in February 2013.)
We were a Kraft household growing up, certainly eating our fair share of blue box mac and cheese. The steps to make it are quite simple: cook the pasta and drain out the water, put the pasta back in the pan, then stir in milk, butter, and the cheese packet. I must have done this hundreds of times throughout my childhood, never once questioning these instructions. Why would I? Of course Kraft of all companies knows how to make a pot of mac & cheese! I mean, they even tweeted out the instructions, lest you throw away the box and find yourself stranded:
It wasn’t until a few years into my marriage that I realized that I had been making mac and cheese wrong my entire life. Surely there are different ways to make mac and cheese, but is it really fair to say that the Kraft way is wrong? Yes. Yes, it is. What I saw my wife do was nothing less than life-altering. Okay, well at least I still think about it every time I make mac and cheese, some 10 years later.
You don’t mix the sauce ingredients into the pasta-filled pan like a monster! That is the way to clumpy, grainy, cheese-powder disaster. Instead, leave the pasta in the strainer and make the sauce in the empty pan first! Only once it is nice and smooth do you add the pasta back in, so that you get a nice even coating. (In essence, you’re making a roux. Or at least you would be, if the cheese packet contains flour. I’m not sure whether it does.)
It turns out that this is what Annie’s has instructed on their boxed mac and cheese all along:
On the evening that I achieved macaroni enlightenment, I was hesitant to research whether it was my own mother or Kraft that had led me astray as a child. I guess it was a small comfort to discover that it was indeed Kraft, though the betrayal I felt was palpable. Suffice it to say, we’re an Annie’s household now.
That said, I can’t fully explain what’s going on in this picture. I’m willing to accept that this was staged just for the picture, and that someone did not, in fact, have a serious lapse in macaroni-and-cheese judgement.
After 10 years, 8 months, and a handful of days, today is my last day at Google. It’s surreal and bittersweet, but I’m really excited about what’s next. As I’m writing this, I’m sitting outside of Charlie’s, getting ready to go gather my personal belongings and turn in my badge to security.
I joined Google a lot younger, not yet married two years, and before we had our two boys (who just started preschool and kindergarten!). I spent my first couple of years working on Google Buzz and then Google+, then started a 20% project managing Google’s open source releases on GitHub. That turned out to require a lot more than just 20% time, so now, eight years later in Google’s Open Source Programs Office, I’m leaving behind an amazing organization I’m so honored to have been a part of. And I’m so grateful to Chris DiBona for taking a bet on a no-name engineer, and for giving me so many opportunities that helped get me to where I am today.
I’ll be sticking around in open source, and will be starting the next adventure in a couple of weeks. For now, I’m enjoying my final walk around a very empty Googleplex, recalling some great memories, and trying to fully take in how amazing this ride has been. I will miss this place and the phenomenal people I got to work with.
Summer is a great time of year to travel with our loved ones. We’re excited to announce that SO/Family Track tickets are now available for significant others and kids who’ll be in Denver July 11-13th! Please register each person regardless of age so we’ll know how many people to expect.
This will be a great way to meet new people and explore the area while attendees take part in the third annual Gophercon. We’ve got activities planned Monday, Tuesday, and Wednesday, plus a rendezvous room at the convention center for afternoon chill time. We’ll take a tour of the Denver Art Museum and have free time for exploring the city on Monday, then make an excursion to the gorgeous country outside of Denver on Tuesday. Wednesday we’ll have a chance to tinker during Hack Day. We’ll also have family-friendly meetups in the evenings.
This is amazingly cool! I don’t think I’ve ever seen a conference that specifically planned something like this for families.