Setting Up WordPress Multisite with Subdomains and a Wildcard Let’s Encrypt Certificate on NGINX

Recently I found myself needing to move an existing WordPress Multisite installation off of a popular shared host. The main goal was to improve the site’s performance (load speed, etc.), since we have far more ability to fine-tune things in an environment we fully control.

But it’s been a while since I tinkered with Multisite, so I didn’t have a current set of “best practices” for configuring the nginx server block to handle subdomains that might be set up on the fly any time the site’s owner wants to add a new “site” to the network.

More importantly, we’ve switched to Let’s Encrypt as our provider for TLS certificates, and when we initially did so, they weren’t yet handling wildcard certificates. They added this capability some time ago now, but this was my first excuse to try it out.

So the goal was: configure nginx and Let’s Encrypt to properly handle any new subdomains added to the WordPress install without having to manually change the server configuration.

Quick Overview of the Tech Involved

We moved the site to a VPS (a Digital Ocean droplet) running a LEMP stack:

  • Ubuntu 20.04
  • nginx 1.18.0
  • MySQL 8.0.22
  • PHP 7.4

The other sites hosted on this droplet are mostly standard WordPress sites where the www subdomain is redirected to the domain name, so they use a more or less standard nginx server block that we’ve fine-tuned over time.

We’re obtaining TLS certificates from Let’s Encrypt using certbot and the --nginx flag to manage the certificate installation process.

First: nginx Config for WordPress Multisite

Our “standard” nginx server block works really well for WordPress and we get great performance out of a socket connection to php-fpm.

Our nginx rewrite rules, however, don’t anticipate the need to handle multiple subdomains that are subject to change over time.

Thankfully, there’s an nginx recipe for WordPress multisite that we were able to use as a starting point. The important thing to remember is that WordPress can be configured to add new sites as subdirectories (e.g. example.com/mysite) or as subdomains (e.g. mysite.example.com). We’re using the subdomain method, and so this recipe was the one we needed.

The recipe calls for a new section before the server block in the file located at /etc/nginx/sites-available/domain.com that leverages the nginx map module to set up a variable to handle the various subdomains.

map $http_host $blogid {
	default       -999;
}
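Out of the box, that map only ever returns -999. The recipe expects you to add one line per subsite, mapping the hostname to the numeric blog ID that WordPress assigns in the wp_blogs table (the hostnames and IDs below are just placeholders, and this mapping really only matters if your network still serves uploads out of the legacy blogs.dir layout):

map $http_host $blogid {
	default              -999;
	mysite.example.com   2;
	another.example.com  3;
}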

Then inside the server block itself (i.e. between the server { and } for the domain in question), we add some lines that call those variables:

	#WPMU Files
	location ~ ^/files/(.*)$ {
		try_files /wp-content/blogs.dir/$blogid/$uri /wp-includes/ms-files.php?file=$1;
		access_log off;
		log_not_found off;
		expires max;
	}

and

	#WPMU x-sendfile to avoid php readfile()
	location ^~ /blogs.dir {
		internal;
		alias /var/www/example.com/html/wp-content/blogs.dir;
		access_log off;
		log_not_found off;
		expires max;
	}

Aside from those additions, we’re using our standard set of parameters for a typical WordPress installation.
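For orientation, here’s a stripped-down sketch of what the rest of that server block might look like before the certificate goes on (the document root, PHP version, and php-fpm socket path are assumptions for a typical Ubuntu 20.04 LEMP box, not our exact tuning). The detail that matters for Multisite is the wildcard in server_name, which is what lets newly created subsites resolve without touching the config:

server {
	listen 80;
	listen [::]:80;

	# the wildcard entry is what lets new subsites resolve without config changes
	server_name example.com *.example.com;

	root /var/www/example.com/html;
	index index.php;

	# the WPMU /files/ and /blogs.dir locations shown above live in here, too

	# standard WordPress front-controller rewrite
	location / {
		try_files $uri $uri/ /index.php?$args;
	}

	# hand PHP off to php-fpm over a local socket
	location ~ \.php$ {
		include snippets/fastcgi-php.conf;
		fastcgi_pass unix:/run/php/php7.4-fpm.sock;
	}
}

Certbot’s nginx installer adds the listen 443 and ssl_certificate directives to this block once the wildcard certificate is issued below.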

Next: Get a Wildcard Certificate from Let’s Encrypt

Our typical method is to call certbot with the --nginx flag and let it put its ACME protocol token in the /.well-known subfolder to handle domain validation. This method, known as the HTTP-01 challenge, works well when we’re requesting issuance of a cert for the domain itself and for the www subdomain.

That request typically looks something like this:

sudo certbot --nginx -d example.com -d www.example.com

But we cannot use the HTTP-01 challenge to request a wildcard cert.

What is a wildcard TLS certificate?

Requests like the one shown above result in the issuance of a certificate that is valid for exactly two domains: example.com and its subdomain www.example.com.

Thus, when a visitor’s web browser connects to the server and requests a URL containing one or the other of those addresses, the server can legitimately negotiate a TLS connection and encrypt the traffic for it.

But since we’re setting up WordPress as a multisite installation in order to allow the site owner to create new sites on the fly, we aren’t able to predict all of the subdomains that need to be listed on the TLS certificate.

What we need instead is a TLS certificate that is valid for the domain itself (i.e. example.com) and for any subdomain of that domain. Thus, we want to request a certificate using a wildcard to represent the subdomain. The asterisk character serves as the wildcard, so we want the cert to be valid for example.com and all possible subdomains: *.example.com.

The problem is that Let’s Encrypt does not permit the issuance of wildcard certificates using the HTTP-01 challenge. Instead, we need to make use of the DNS-01 Challenge.

Configuring the Let’s Encrypt DNS-01 Challenge on the Digital Ocean Platform

The DNS-01 Challenge requires that you prove that you have control over DNS for the domain rather than just a web server for the domain. It works by setting a TXT record for the domain at _acme-challenge.example.com which contains the ACME protocol token as its value.
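If you’re curious what the challenge looks like, the record is an ordinary TXT lookup away while a validation is in progress (the query itself isn’t specific to Digital Ocean):

dig +short TXT _acme-challenge.example.com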

As you might imagine, having to create this record manually and then update it every 90 days when Let’s Encrypt needs to renew the certificate would be a painful manual process.

Thankfully, there are DNS plugins for certbot which help automate the process as long as DNS is hosted by one of the compatible providers. Currently, that list includes: Cloudflare, CloudXNS, Digital Ocean, DNSimple, DNS Made Easy, Google, Linode, LuaDNS, NS1, OVH, Route53 (from Amazon Web Services), and any DNS provider that supports dynamic updates as described in RFC 2136.

It was a happy accident that I had decided to use Digital Ocean to host the DNS for this domain. I did it without realizing that I needed this kind of compatibility. So I was pleased to discover that Digital Ocean supports DNS updates via its API and that there’s a certbot plugin for their platform: dns_digitalocean.

I found some of the documentation around getting this plugin installed on my server a little confusing. One recommendation involved using pip3 (the Python 3.x package manager) to install it. But since I had installed certbot from the Ubuntu standard PPAs using the apt package manager, the version of the plugin that I got using pip3 wasn’t actually connected to the certbot installation I was using.

Ultimately, I realized I could install the plugin I needed using apt like this:

sudo apt install python3-certbot-dns-digitalocean
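You can confirm that certbot actually sees it by listing its installed plugins; dns-digitalocean should show up in the output:

sudo certbot plugins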

To fully configure it, I got a shiny new personal access token for the Digital Ocean API from the Applications & API page of my Digital Ocean account.

Then, I created a new file at /home/myloginusername/.secrets/certbot/digitalocean.ini that looked like this example from the plugin documentation:

# DigitalOcean API credentials used by Certbot
dns_digitalocean_token = 0000111122223333444455556666777788889999aaaabbbbccccddddeeeeffff

Note: in case this isn’t abundantly obvious, the token shown above is fake and will need to be replaced by a real token that is unique to you. Treat it as if it were the password to your Digital Ocean account, because anyone with your API token has access to everything the API can do.

Also, one potential point of confusion I ran across: since I use sudo to run certbot with elevated privileges, I thought perhaps this file should be located in the root user’s home folder (i.e. /root/.secrets/...), but this turned out to be incorrect. It belongs in the home folder of the user you authenticate as when you log in to Ubuntu.

Also, chmod that file to 0600 to help keep it safe:

chmod 0600 /home/myloginusername/.secrets/certbot/digitalocean.ini

You shouldn’t need sudo for that command since it’s in your home folder.

With the certbot DNS plugin for your DNS provider installed and configured, you’re ready to request the cert.

In my case, I wanted to use the dns-digitalocean plugin to handle the authentication part of the certificate issuance, but I still wanted to use the nginx plugin to handle the installation of the certificate. This would greatly simplify ongoing maintenance tasks because I’d used the nginx plugin to handle installation of the other certs on this server.

Thankfully, it’s possible to combine certbot plugins to do exactly this by using the --installer flag with “nginx” as its value.

The command I used ended up looking something like this:

sudo certbot \
  --dns-digitalocean \
  --dns-digitalocean-credentials ~/.secrets/certbot/digitalocean.ini \
  --dns-digitalocean-propagation-seconds 60 \
  --installer nginx \
  -d example.com \
  -d '*.example.com'

Basically, the command tells certbot to create an ACME protocol token, create (or update) the TXT record for this domain via the Digital Ocean API so that the record’s value matches the ACME token, wait 60 seconds to give DNS a little time to propagate, and then run the DNS-01 challenge and issue/install the cert.
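Both plugins get recorded in the domain’s renewal configuration under /etc/letsencrypt, so renewals should run unattended from here on out. A dry run is a cheap way to confirm that before the 90-day clock runs down:

sudo certbot renew --dry-run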

Your Mileage May Vary

Obviously, different server configurations and hosting environments will work differently, but if you happen to be running a VPS with a LEMP stack based on Ubuntu 20.04 and need WordPress Multisite to work with wildcard subdomains and a wildcard TLS certificate from Let’s Encrypt, then this process will generally be workable.

I hope you found this useful. What questions do you have? It’s always great to hear from you, either here (feel free to comment below) or on Twitter: @TheDavidJohnson.

Cheers!

Image credit: Fikret tozak on Unsplash

Testing Out Collaborative Editing in Google Docs for WordPress

OK it’s not every day that I get super excited about new WordPress features. But today, Matt announced something that made me jump out of my chair and yell for joy.

What is it?

Google Docs integration for WordPress

The idea is that you create your content in Google Docs, using all of the lovely collaborative features like multiple (even simultaneous!) authors, commenting, great editing tools, cloud-based storage, and so forth.

Then… once it’s ready to go, push a button and voila! — the content shows up in your WordPress site.

The magic happens thanks to Jetpack, the plugin that those of us running self-hosted WordPress use to connect our sites to Automattic’s WordPress.com infrastructure.

So… you need to have the Jetpack plugin enabled and your site connected.

Then you need to use the WordPress.com for Google Docs add-in (that link goes to the Google Web Store page for the add-in, but you can also get it by going to “Add-ons” inside a Google Doc).

As much as I love the WordPress editor, this is a game changer. I live in Google Docs, especially since I acquired my first Chromebook about a year ago.

There’s one more hiccup. The authentication passes through multiple layers (after all, you wouldn’t want just anyone editing a Google Doc to be able to push content to your website, would you?):

  1. Your Google Account (make sure you’re signed in to the one you want)
  2. Your WordPress.com account — meaning the account that you used to connect your self-hosted WordPress site up to the Jetpack/WordPress.com infrastructure. (Here again: make sure you’re signed in to the right one!)
  3. Your local WordPress account (meaning the account that you sign in to your actual WordPress site with)

It was at that last authentication step that I hit a snag:

I had never activated the Jetpack JSON API on this site. So… I had to go through the Authorization process one more time after fixing that.

But hey! Needing to screenshot an error message gave me a chance to see how images work in this whole process. I’ll let you know once this content gets pushed to my WordPress site!

Update

After hitting the “Save Draft” button, my content got magically pushed to this site. (If you hadn’t figured it out, I wrote the first draft of this in Google Docs!)

The image came along with it!

But…. my cropping didn’t. The image above is the full screenshot. In Google Docs, I had cropped it to get rid of the 37 Chrome tabs and so forth (hyperbole, I know, but that’s only one of my 3 current Chrome windows!).

All in all, this is a fantastic experience. There’s even a button in Google Docs to “Preview” the post on the live site, and of course a way to update the draft from Google Docs.

I’m guessing you’ll have to manage your own workflow for which version is the latest. I assume if I make changes on my site, but then hit the “Update Draft” button in Google Docs, that version will overwrite whatever is on the site. But this is to be expected. (And I haven’t tested it yet, so… who knows?)

Way to go, team WordPress!!

Join me at WordCamp Miami 2017

Now that it’s been officially announced, I’m excited to invite you to join me for a discussion about “Getting Real Business Results from Your Content Marketing Efforts” at WordCamp Miami!

The event runs March 24-26 (Friday through Sunday) at Florida International University in Miami. The Miami gathering is one of the longest-running and most well-respected events in the WordCamp series, and it’s an honor to be invited to participate.

Last year, the weekend was outstanding, and my lovely wife, Jill, and I are truly looking forward to another spectacular time in South Florida!

WordCamp Miami 2016: Day Two

My wonderful, gorgeous wife, Jill, and I arrived on campus at Florida International University for day 2 of WordCamp Miami 2016… just in time to enjoy another round of bagels & coffee from Einstein Brothers Bagels.

After the opening remarks, we got our dose of Cain & Obenland in the Morning, which was a riot.

A highlight of the “Morning Show” was when they brought in Mark Jaquith for an interview.

Their final segment on WordPress news was fun. Some of the tidbits they shared about what’s happening with WordPress Core were exciting, including the fact that we’ll soon be saying goodbye to the “Bleak Screen of Sadness™”

Jill and I stayed together for the first session of the morning, and we caught “Bootstrapping Your WordPress Business – Going from 0 to 10 Employees” with Scott Mann, who runs Highforge, an agency in Central Florida. Scott started with a compelling story about smoke jumper Wagner “Wag” Dodge and a famous firefighting incident at Mann Gulch which resulted in an on-the-spot innovation that continues to be used by firefighters today.

The point: when you’re bootstrapping your business, you’ll probably need to keep replacing your straps, because they’re going to get burned off!

Scott’s session ran the gamut from tools you can use as you bootstrap to finding and hiring the right talent and even when and how to raise your rates. Very practical. If you own a business and you’re bootstrapping and trying to grow, check out his slides or catch the replay if you can.

Next, Jill headed off to the “All Users” track, and I stuck around for “Product Marketing Tips for Commercial Plugins” with Chris Lema.  While he was specifically focused on developers who are selling premium WordPress plugins, his actual talk contained a ton of useful tactics for any business.

 

The Afternoon

The Business track that the organizers put together for today has turned out to be utterly fantastic.

A very pleasant surprise was the panel discussion which featured Brett Cohen, co-founder of emagine; Karim Marucchi, CEO of Crowd Favorite; Andrew Norcross, founder of Reaktiv Studios; and Kimberly Lipari, COO of Valet.  The listed topic was “How to Scale Your Business,” and the discussion was incredibly real and authentic. Most of all, it was really valuable.

 

WordCamp Miami 2016: Day One

My amazing wife & business partner, @GracefulJill, and I arrived on campus at FIU today just in time to get a great parking spot and jump in the registration line.

Right away, the #WCMIA team showed that they had done a great job getting things organized—the registration line ran smoothly, and we got some great event swag.

After visiting some of the sponsors’ tables, we staked out a couple of seats for the opening remarks session.

We planned to divide & conquer, but ended up both catching the session “How to Keep a Client Happy” by Christina Siegler on the Content & Design track.

After that session, I snuck over to the Development track to hear a couple of more technical sessions, and Jill stayed for more Content & Design goodness. She spoke very highly of the session with Michelle Schulp on “Becoming The Client Your Developer Loves”—so much so that I’m planning to catch the recording.

In “Writing Multilingual Plugins and Themes,” John Bloch didn’t shy away from tech issues, and he dug right into code samples while explaining the concepts around internationalization (“I18N” for short).

Then I caught Chris Wiegman, whom I’ve gotten somewhat acquainted with since he relocated to paradise (a.k.a. Sarasota) a little over a year ago. He’s known as an expert in WordPress security, and his “Application Security For WordPress Developers” was entertaining, informative, and thorough… not to mention somewhat over my head in spots.

On my way to the Development track, I bumped into Pam Blizzard, one of the organizers of the WordPress community in Sarasota.

Pam Blizzard, a valuable member of the Sarasota WordPress community

I’ll try to come back and fill in more about our experience as time permits!

The Afternoon

There was an authentic, vulnerable talk on getting the most out of the WordPress community from Marc Gratch. He shared some very personal experiences (that I’m sure many of us can identify with) about working alone & working remotely, and how the amazing WordPress community can be a great support system.

His “give more than you get” approach was fantastic, and true to form, he shared a great list of resources he’s built over time.

Then came a fast-paced session on building a 6-figure email list with Syed Balkhi, creator of OptinMonster, WPBeginner, and many other sites & tools.

Nile Flores did a thorough, informative session on Yoast SEO, but managed to cover quite a bit of “SEO basics” ground in the process. This session should be mandatory for site owners who are new to how Google’s search results work and need a nice overview.

Then I caught up with Jill and we got some great lessons from Dr. Anthony Miyazaki about what is an acceptable number of times to dip your chip into the guacamole. He showed how you have to plan ahead so that you have enough of your chip left to really maximize your dip.

Owning Your Own Content

One of the serious considerations of our time is the need to store and have reasonably usable access to all the digital media we are creating.

How often do we snap a photo and upload straight from our mobile devices to services like Instagram and Facebook?

How easy is it, using the apps on our phones, to bang out a tweet or a status update?

But have you ever given any thought to what might happen if those sites disappeared? How much of your personal life is recorded there?

Consider my own situation.

I joined Facebook in 2008, coming up on 8 years ago now, and have had countless meaningful interactions there with people I care about (let’s set aside all the less meaningful interactions for the moment).

In that time, I’ve been through maybe 6 or 7 smartphones. I’ve snapped thousands of photos, many of which I have no idea where to find at the moment*, but some of which I have uploaded to sites like Facebook, Twitter, and various iterations of what is now Google Photos.

Unlike in decades past, today we simply don’t “print” the photos we take (I can’t think of a good reason why I would, frankly), but this means that we also don’t give much consideration to what happens to those photos—not to mention our personal interactions and communications, and even stuff we upload to the web or social networks—after the fact.

I don’t purport to have all the answers. In fact, my purposes in writing this post today are more around sparking some thought rather than speaking to specific solutions, which almost certainly will vary from person to person.

But if you treat your social media profiles like a de facto backup of some of your most treasured photos (like I have), and you’ve had meaningful interactions with others on social networks (like I have), then an important question needs to be raised:

What would you lose if one or more of these sites were to shut down?

This week, I spent a fair amount of time getting better acquainted with some of the principles established by the #Indieweb community. This is a group of people committed to the creation and viability of the “open web.”

The terminology around the “open web” is used to draw a distinction between the web that can and should be created and used by individuals, as opposed to the “corporate web,” which is centered around commercially driven services.

One of the goals of the movement is to keep the web open and free. This doesn’t exclude the usage of paid services—on the contrary, it’s clear that even users of the open web will need to pay for services like domain registration and web hosting (although there are, as I discovered this week, more free options for those items than I would’ve guessed).

In fact, the distinction between the “free and open” web and the “corporate” web isn’t so much one of payment, but rather of ownership, access to, and control over one’s own data.

To illustrate this, IndieWebCamp, one of the groups central to the #IndieWeb movement, maintains a list of “site deaths”: services (often, though not always, free) that let users write blogs and upload/store/share photos, among other things, but which have famously shut down over the years. Often, this leaves users with little or no opportunity to download the data they’ve stored on these services.

Examples? When Geocities shut down in 2009, something like 23 million pages disappeared from the web. Previously, AOL killed off AOL Hometown, removing more than 14 million sites from the web. Google has killed off a number of products, including Google Buzz, Google Reader (which personally affected me), Google Wave, and countless others.

In many cases, users had even paid for the services, but due to a variety of factors, such as:

  • lack of profitability
  • changes in ownership
  • mismanagement
  • shifts in direction, and even
  • loss of interest on the part of the owner(s)

…the services get shut down anyway.

There are a couple of tragic ramifications of these site deaths.

One is that often the people most harmed are the ones least knowledgeable about setting up and maintaining their own web presence.

Often the appeal of a free or inexpensive blogging platform (for example) is that one doesn’t need to gain any real know-how in order to use it.

While that’s great in terms of getting people to get started publishing on the web or otherwise using the web (which I’m certainly in favor of), it has often ultimately sucker-punched them by never creating an incentive (until it’s too late, of course) to gain the minimal amount of knowledge and experience they would need to maintain something for themselves.

Even when the users are given the opportunity to download their data, which is not always the case, these are the very people least likely to know how to make use of what they’ve downloaded.

Another tragic loss is for the web community at large. When a service of any significant size shuts down, this often results in the loss of tremendous amounts of information. Vanishing URLs mean broken links throughout the parts of the web that remain, which makes the web less useful and more costly to maintain for us all.

Some of what is lost is of more value to the individuals that originally uploaded or published it than to the rest of us, of course. But even personal diaries and blogs that are not widely read contribute to our large-scale understanding of the zeitgeist of the times in which they were created, and that is something that could be preserved, and for which there is value to us from a societal perspective.

Geocities, as an example, has accurately been described as a veritable time capsule of the web as it was in the mid-1990s.

Maintaining Our Freedoms

At the risk of being accused of philosophizing here, I’d like to step away from the pragmatic considerations around the risk of losing content we’ve uploaded, and look for a moment at a more fundamental risk of loss: our freedom of speech.

The more we concentrate our online speech in “silos” controlled by others, the more risk we face that our freedoms will be suppressed.

It’s a simple truth that centralization tends toward control.

Consider this: according to Time, as of mid-2015 American Facebook users were spending nearly 40 minutes per day on the site.

According to a study published in April, 2015, a team of researchers found that the majority of Facebook users were not aware that their news feed was being filtered and controlled by Facebook. (More on this here.)

As a marketer, I’ve understood for many years that as a practical consideration, Facebook must have an algorithm in order to provide users with a decent experience.

But the question is, would Facebook ever intentionally manipulate that experience in order to engineer a particular outcome?

In fact, they would.

So… we’re spending an enormous amount of our time in an environment where most of the participants are unaware that what they see has been engineered for them. Furthermore, the audience for the content they post to the site is also then being manipulated.

Let me emphasize that it’s clear (to me, at least) that Facebook has to use an algorithm in order to provide the experience to their users that keeps them coming back every day. Most users don’t realize that a real-time feed of all the content published by the other Facebook users they’ve friended and followed, combined with content published by Pages they’ve liked, would actually be unenjoyable, if not entirely unusable.

But the logical consequence of this is that a single point of control has been created. Whether for good or for ill—or for completely benign purposes—control over who sees what we post exists. Furthermore, anyone is at risk of having their account shut down for violating (knowingly or unknowingly, intentionally or otherwise) a constantly-changing, complex terms of service.

So… even if you aren’t concerned about a service like Facebook shutting down, there remains the distinct possibility that you risk losing the content you’ve shared there anyway.

Includes “Freedom of Thought Ben Franklin” by k_donovan11 – Congressional Quote. Licensed under CC BY 2.0 via Wikimedia Commons

In other words, someone else controls—and may, in fact, own—what you’ve posted online.

What Can We Do?

All of this has strengthened my resolve to be committed to the practice of owning and maintaining my own data. It isn’t that I won’t use any commercial services or even the “silos” (like Facebook and Twitter) that are used by larger numbers of people, it’s just that I’m going to make an intentional effort to—where possible—use the principles adopted by the IndieWeb community and others in order to make sure that I create and maintain my own copies of the content I create and upload.

There are 2 principal means of carrying out this effort. One is POSSE: Publish on your Own Site, Syndicate Everywhere (or Elsewhere). This means that I’ll use platforms like Known in order to create content like Tweets and Facebook statuses, as often as practical, and then allow the content to be syndicated from there to Twitter and Facebook. I began tinkering with Known more than a year ago on the site social.thedavidjohnson.com.

As an example, here is a tweet I published recently about this very topic:

While it looks like any other tweet, the content actually originated here, where my personal archive of the content and the interactions is being permanently maintained. This works for Facebook, as well.

I’m making the decision now to gradually shift the bulk of my publishing on social networks to that site, which will mean sacrificing some convenience, as I’ll have to phase out some tools that I currently use to help me maintain a steady stream of tweets.

The payoff is that I’ll have my own permanent archive of my content.

In the event that I’m not able to find suitable ways to POSSE, I will begin to utilize the PESOS model: Publish Elsewhere, Syndicate to your Own Site.

Since some of the silos that I use don’t permit federation or syndication from other platforms, I’ll be pulling that content from the silo(s) in question back to my own site. An example is Instagram, for which inbound federation is currently difficult, but for which outbound syndication (back to my own site) is achievable.

Not as Hard as it Sounds

I am, admittedly, a geek. This makes me a bit more technically savvy than some people.

But… the truth of the matter is that this really isn’t hard to set up. The IndieWebCamp website provides an enormous wealth of information to help you get started using the principles of the IndieWeb community.

And it can begin with something as simple as grabbing a personal domain name and setting up a simple WordPress site, where if you use the self-hosted version I’ve linked to, you’ll have the ability to publish and syndicate your content using some simple plugins. Alternatively, you could use Known, which has POSSE capabilities (and many others) baked right in.

There are loads of resources on the web to help you take steps toward owning and controlling your own data.

Note: For those who live in or around Sarasota, if there’s enough interest, I’d be open to starting a local group (perhaps something of a Homebrew Website Club), to help facilitate getting people started on this journey. Respond in the comments below or hit me up on Twitter if you’re interested.

Personal Note of Gratitude

I’m indebted to a long series of leaders who have worked to create the open web and who, over a number of years, have influenced my thinking to get me where I am today. There are many, but I’d like to thank a few who have had the greatest direct impact on me personally. They are:

  • Matt Mullenweg, co-founder of WordPress. Matt helped me understand the important role of open source software, and although he didn’t invent the phrase, he personally (through his writings) introduced me to the idea of “free as in speech, not free as in beer.”
  • Kevin Marks, advocate for the open web whose tech career includes many of the giants (e.g. Google, Apple, Salesforce, and more). Kevin understands the technology and the ethical and societal implications of factors affecting the open web, and he has taken on the responsibility of serving as a leader in many ways, including in the IndieWeb community.
  • Ben Werdmuller, co-founder of Known. Ben and his co-founder, Erin Jo Richey, have also stepped up as leaders, not only creating technology, but endeavoring to live out the principles of the open web.
  • Leo Laporte, founder of TWiT. As a broadcaster, podcaster, and tech journalist, Leo was instrumental in introducing me to people like Kevin Marks and Ben Werdmuller by creating and providing a platform for concepts like these to be discussed.

As I said, there are plenty more I could mention. In today’s world of the internet, we all owe an incredible debt of gratitude to many who have worked tirelessly and often selflessly to create one of the greatest platforms for free speech in all of history. Their legacy is invaluable, but is now entrusted to us.

Let’s not screw it up.


*I’ve got most of them. They’re stored on a series of hard drives and are largely uncatalogued and cumbersome to access. Obviously, I need to do something about that.

More Details About a WordPress Attack Making the Rounds

Since the same type of attack has hit my websites on a second web host, I want to provide some more details about the attack I recently experienced prior to writing about why you need to update WordPress and your plugins.

Yesterday, I logged in via FTP to a separate hosting account on a completely different web host, and found some of the same signs that accompanied the original attack on my 1and1 account.

The first sign is a suspicious file in the root of the website. The filename is “.. ” — as in ‘dot dot space’

This is particularly insidious, because the filename is designed to make the file hard to find: “..” by itself is the Unix/Linux convention for “parent directory.” (It works the same way on Windows & DOS systems as well.)

Thus, if you aren’t paying attention and looking specifically for it, it’s hard to notice. Also, since most systems don’t give you any sign of the “space” in the filename, it’s hard to open the file. (Here’s where I have to give credit to a sysadmin at 1and1 for helping me discover the space in the filename. I kept telling him it was called “..” and he said, “that’s impossible.” He was right.)
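As an aside, if the account happens to have shell access, the trailing space is much easier to spot from the command line than from an FTP client. Piping a directory listing through cat -A marks the end of every line, so the “.. ” entry stands out from the real “..” one:

# a filename with a trailing space shows up as ".. $" instead of "..$"
ls -la | cat -A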

Either way, I have found that you can simply rename the file and then download it via FTP to open it up and see what’s inside. Here’s the code inside the “.. ” file:

This is obfuscated somehow… perhaps encoded with base64 or some other method.

I’m not certain what it does, but my guess is that it only works when in combination with the code that was inserted into PHP files. Here are the filenames targeted by the attack:

  • wp-config.php
  • index.php
  • header.php

While index.php & header.php are common filenames in a wide variety of php websites, wp-config.php is unique to WordPress. Thus, I’m fairly certain that the creators of this attack were particularly interested in attacking WordPress sites.

The wp-config.php file only shows up in the “root” folder of any given WordPress installation. On the other hand, index.php appears in a number of folders in a typical WordPress installation. Here are a few examples:

  • the “root” folder of the site
  • the wp-admin folder
  • wp-content folder
  • wp-content/themes
  • wp-content/plugins
  • wp-content/uploads
  • the main folder of any given theme
  • the main folder of some plugins

The header.php file, on the other hand, is most likely to show up in one or more of your theme folders.

My guess is that whatever script gets uploaded to your server gets busy locating files that match those filenames and injecting the malicious code.
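If you’d like to check your own installation, it’s straightforward to enumerate every file matching those three names and then eyeball each one (run this from the site’s root folder):

find . -type f \( -name 'wp-config.php' -o -name 'index.php' -o -name 'header.php' \)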

The code is intended to be hard to spot. First of all, the PHP files are edited without modifying their timestamps. Thus, they don’t look like they’ve been edited recently.

Also, the code contains an opening <?php tag, and then is immediately followed by 1183 spaces. This means that even if you open an infected file in a typical code or text editor, the malicious code will be so far off your screen that you won’t notice it. You can scroll down and see all of the untouched PHP code that you’re expecting to see in the file.

From being attacked in the past, I was already aware of both of those techniques, so I opened the files and scrolled all the way to the right, finding the code.
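If you have shell access, you can also let grep do the scrolling for you by flagging any PHP file where the opening tag is followed by a long run of spaces (the 100-space threshold here is arbitrary):

# list PHP files whose opening tag is followed by at least 100 spaces
grep -rlE '<\?php {100,}' --include='*.php' .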

Here’s an exact copy of what’s being inserted into these files.

What Does This Code Do?

Well… the only reference to this particular attack that I’ve been able to find online is in this thread (in German). It confirmed a suspicion I’d been holding: something was inserting ad code into the WordPress admin pages (the “Dashboard,” specifically) of my sites. That means the injection is only visible when logged in as an admin user, and it is intentionally targeting WordPress site operators.

1and1 insisted that my sites were injecting malware into visitors’ browsers. Perhaps this is the malware. Perhaps the code was doing more than just displaying the ads I saw.

In any case, I had originally attributed these ads to a recently-added Chrome extension which I immediately disabled.

Now that I’ve seen the German thread, I’m more convinced that the sites which were displaying that ad were, in fact, the ones infected with this malicious attack.

So… I have no proof as to what this code actually does. It’s all obfuscated and it’s beyond my pay grade to figure it out anyway. My only hope is that by writing this up, someone (or perhaps more than one someone) will be able to use what I’ve discovered to help make sense out of it and put this sort of crap to an end.

If you have thoughts about this, don’t hesitate to comment below or hit me up on Twitter. Thanks.

Reason #478 to Update WordPress and Plugins

Dumb. Really Dumb.
Photo via BigStockPhoto.

We all know we shouldn’t let an old WordPress site sit around without updating it. It’s dangerous, they say.

And… for the most part, I’m really good about staying on top of this—at least when it comes to mission-critical sites. But… I’ll admit, there are a few sites that I built and forgot about.

One in particular came to my attention this week. It was a site I built around a hobby of mine. It needed a WordPress upgrade.

Okay… it had missed a lot of WordPress upgrades.

But worst of all: it had a plugin that was very old and had stopped being updated by its original developer. It was a stats plugin that I really loved back in the days before Jetpack gave us access to WordPress.com stats.

That particular plugin had a vulnerability which was exploited by some nasty malicious hacker.

How I Found Out I’d Been Hacked

This particular site was in one of my longest-standing hosting accounts… one I’ve had since 2006 with 1and1.com. I keep telling myself I’m going to clean that account out and move all the sites, but I just haven’t done it. That’s part of the reason I’ve let some of the sites go unpatched—because why patch ’em if you’re gonna move ’em, right?

<sigh>

Well… somewhere along the line, 1and1 started the practice of sending an email when they encountered something suspicious going on. In the past, they’ve notified me when SPAM emails started going out because of the TimThumb WordPress vulnerability and when their antivirus scanner found malware in a PHP file.

I’ve always been quick to respond when I see one of those, and it happened just a few weeks back. In that case, it just turned out to be an old inaccessible file that I’d renamed after fixing a previous problem.

On Monday of this week, I got another one of these emails:

Anti-virus scan reports: Your 1&1 webspace is currently under attack [Ticket XXXXXX]

Even though I was busy, I jumped right in to see what was happening. They identified a file that had been uploaded to my webspace, and when I saw where it was located, I knew exactly what was going on. That old plugin was still running on the site I mentioned earlier.

So… I logged in via FTP, downloaded a copy of the “malicious file” just so I could see it, and then deleted it and the entire plugin that it got in through.

No big deal.

Or so I thought.

Sites Down

Yesterday, I discovered that all of the sites in that hosting account were down. For most of them, I was getting a simple “Access Denied” error from 1and1 when I tried to load them up in my browser.

A minor panic set in as I went in and tried to discover what was going on.

What I found was very perplexing. The file permissions on the index.php file, the wp-config.php file, and a handful of other files in these sites were changed to 200.

If you aren’t familiar with Linux file permissions, 200 basically means that the file can’t be read by anyone. So… if that file happens to be critical to the running of your site, then… your site doesn’t work.

So… I changed the permissions on a couple of these files in one of the most important sites just to try to get it working. Oddly… within a few minutes of me setting the permissions to 644, they were automatically changing back to 200.

“Hmmmmm…. maybe there’s some malware still running in my account,” I thought to myself.

That’s when I noticed a whole bunch of database “dump” files in the root of my webspace. They looked like this:

dbxxxxxxxx.dump.lzo

Uh oh.

So… I replied to the email I’d gotten a few days earlier, and explained what was going on. This updated the “ticket” in 1and1’s Abuse Department so they could have a chance to respond.

After working on things for a few more minutes, I couldn’t stand it any longer. I dialed the 1and1 Support Department (something I truly hate to do) and waited. Within a short time, I was on the line with someone from India who had undergone a significant amount of accent reduction, and I explained what was going on. When he was unable to find my ticket ID, I explained that I’d gotten an e-mail. He put 2 and 2 together and connected me with the Abuse Department.

Then… for the first time in the 8 years that I’ve had this account, I spoke to an American. I mean… fluent English. Clearly no foreign accent. And also for the first time, he actually knew what he was talking about!

He reviewed the ticket and was able to explain a little better what had occurred. Hackers had gotten in through unpatched software (which I knew) and had managed to execute shell commands with my account’s user privileges.

Within what must’ve been a very short period of time, they inserted malicious code into approximately 1,500 files in my webspace. This means that they infected even the WordPress sites that were all patched and running the latest versions.

All told, somewhere near 40 sites were infected.

1and1’s systems were automatically changing the file permissions for any infected files to 200 in order to keep anyone from accidentally downloading malware when visiting my sites.

So… then began the painstaking process of removing all the malicious code that had been inserted and bringing the sites back on line one by one.
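For what it’s worth, once the injected code is actually gone, the host’s 200-permission quarantine can be undone in one pass rather than file by file (644 is the usual default for WordPress files; adjust if your host expects something different):

# find every file locked down to mode 200 and restore normal read permissions
find . -type f -perm 200 -exec chmod 644 {} \;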

Could This Happen To You?

Yes. And it’s just a matter of time.

Since then, I’ve provided more details about the attack in this post, and I’m planning to write an update explaining exactly what to do if you fall victim to an attack like this. It’s not particularly difficult to fix, but if you have 1,500 files across 40 sites affected, it’s gonna take some time.

WordPress Site Hacked: NoIndex and NoFollow All Links

Yes… You Know Who You Are

This morning I made the startling discovery that an important WordPress site belonging to one of our clients had been hacked.

A Little History

If you’ve heard me speak in the last 5 years, you know that I’m a huge believer in the power of content marketing. We regularly recommend and teach business blogging basics to our clients. We have no desire to turn them into bloggers per se, but we’ve trained them that producing fresh, high quality content is a fantastic way to achieve visibility online and even provide fodder for social media outlets like Facebook & Twitter.

So… one of our clients who hired us to build out their WordPress site and for whom we’ve provided a fair amount of training and coaching for some time now began to experience a decline in search engine rankings. In their case, WordPress is installed on a separate domain from their main website. Their main website was historically not performing well from a search engine point of view (although it was great from virtually every other perspective when it was built), so WordPress was being used as a way to help prop up the main site. And it worked. Really, really well.

Imagine my surprise, then, when this particular site began to drop in the rankings for no apparent reason. Nothing had changed that we could tell. We did a little research and paid attention to what the competitors were doing and could see nothing significant enough to account for the change. It was very much an anomaly, because all of our other clients who were doing what we trained them to do were doing just fine.

So today, quite by accident, we found the culprit.

The WPRef Plugin

We were reviewing a piece of content before it got published when we discovered that a couple of the links had a rel=”nofollow” attribute. The content writer who was working on it had no knowledge of how to manually create that type of link (we certainly don’t train people to do that… especially for links that are created intentionally for search engine purposes!), so we knew something was up.

I inquired a little further to find out where the link had come from, and the answer was, “I copied it from another post.”

Hmmmm…. well… I assumed at first that something had crept its way into an earlier post and perhaps it had been duplicated a couple of times. I wasn’t looking forward to hunting down the original link. As I heard someone say recently, it’s like looking for a needle in a needlestack! But then I noticed that there was more than one link acting that way. So… I used the WordPress “preview” function to take a look at how the new post would look, and decided to “view source code” to see if the changes I’d made were taking effect.

That’s when I noticed this:

Every link within the content had been modified, sitewide, with a noindex tag and a rel=”nofollow” attribute.

That would be a problem. The site’s been running for a while, and there was a significant amount of content.

Digging a little deeper, I found that a plugin had been installed and given the name “WPRef.”

We had backed up and upgraded the site to the latest version of WordPress on February 3rd. So… we checked our backup and found that the plugin was not contained in it. On the server, we found (via FTP) that a file called “wpref.php” had been copied to the /wp-content/plugins folder on February 10th.
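If you have shell access, one quick check is to list any loose PHP files dropped directly into the plugins folder since your last known-good backup; legitimate plugins almost always live in their own subdirectories, so stray single files stand out (the date below is just a placeholder for your own backup date):

# list single .php files sitting directly in wp-content/plugins, newer than the backup date
find wp-content/plugins -maxdepth 1 -type f -name '*.php' -newermt '2015-02-03'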

Not only had the plugin been placed in that folder, it had been activated.

Checking a little deeper, we discovered that the plugin’s only function was to add a noindex tag and a “nofollow” attribute to every outbound link in the site’s content.

This amounts to a very specific, malicious attack. Its only possible purpose is to cause Google (and other search engines) to ignore the site’s links.

Needless to say, I was infuriated. We’ve taken steps to harden that particular site. All my searching and other efforts to find evidence that others have encountered a hack like this have turned up nothing. It appears that (at least for now) this is a one-off, one-shot hack job. It’s hard not to believe that this site was specifically targeted on purpose.

The amusing thing was that the plugin added an options panel to the “Settings” menu. Within that, it output a bunch of gibberish, including some Russian domain names.  In the “Active Plugins” area, it purported to have “code.google.com” as its “plugin site,” and its author was listed as “Sergei Brin.” I was so distracted by the infuriation and frustration of the whole thing that I failed to recognize that it wasn’t just a Russian-sounding name to match the other Russian references… it’s the (botched) name of the famous Google co-founder.

Humorous.

So… we’ve saved a copy of this little piece of php code. Obviously, we’ve removed it from the site in question and have tested the site out. Our links are back to normal now. Presumably, this client’s search engine rankings will return back to their prior positioning. Actually, since the rankings were declining, we’ve stepped up the game for this client with some additional efforts and so the rankings should actually move higher than ever. So… if this was, in fact, a malicious attack which singled out this particular business… the plan has backfired.

Thanks. Whoever you are.

Long Awaited: BarCamp Sarasota!

Though it’s still in the early stages of getting organized, I’m thrilled to announce the recent discovery of BarCamp Sarasota! Some old friends along with some friends I’ve not yet been introduced to are responsible for making this happen.

Things have gotten underway with a new home on the web and a Ning group which is all accessible at the BarCamp Sarasota website. Already there’s been an organizational meeting and another one is on the calendar.

So… calling all techies, bloggers, social media types, programmers, eggheads, geeks, propellerheads, etc.

Get over there and check out what’s going on… then get involved!

Then perhaps if there are enough WordPress users around the Sarasota area, we can manage to put together a WordCamp too!

(With apologies to all of my Geek friends for the photo… I’d hate to be accused of trafficking in stereotypes! Especially when we’re planning the takeover of the world! Oh… and… for the record, that is NOT a picture of me… from any point in my life!)