We planned to divide & conquer, but ended up both catching the session “How to Keep a Client Happy” by Christina Siegler on the Content & Design track.
After that session, I snuck over to the Development track to hear a couple of the more technical sessions, and Jill stayed for more Content & Design goodness. She spoke very highly of the session with Michelle Schulp on “Becoming The Client Your Developer Loves”—so much so that I’m planning to catch the recording.
In “Writing Multilingual Plugins and Themes,” John Bloch didn’t shy away from tech issues, and he dug right into code samples while explaining the concepts around internationalization (“I18N” for short).
Then I caught Chris Wiegman, whom I’ve gotten somewhat acquainted with since he relocated to paradise (Sarasota) a little over a year ago. He’s known as an expert in WordPress security, and his “Application Security For WordPress Developers” was entertaining, informative, and thorough… not to mention somewhat over my head in spots.
On my way to the Development track, I bumped into Pam Blizzard, one of the organizers of the WordPress community in Sarasota.
I’ll try to come back and fill in more about our experience as time permits!
There was an authentic, vulnerable talk on getting the most out of the WordPress community from Marc Gratch. He shared some very personal experiences (that I’m sure many of us can identify with) about working alone & working remotely, and how the amazing WordPress community can be a great support system.
His “give more than you get” approach was fantastic, and true to form, he gave a great list of resources he’s built over time:
Then came a fast-paced session on building a 6-figure email list with Syed Balkhi, creator of OptinMonster, WPBeginner, and many other sites & tools.
Then I caught up with Jill and we got some great lessons from Dr. Anthony Miyazaki about what is an acceptable number of times to dip your chip into the guacamole. He showed how you have to plan ahead so that you have enough of your chip left to really maximize your dip.
One of the serious considerations of our time is the need to store and have reasonably usable access to all the digital media we are creating.
How often do we snap a photo and upload straight from our mobile devices to services like Instagram and Facebook?
How easy is it, using the apps on our phones, to bang out a tweet or a status update?
But have you ever given any thought to what might happen if those sites disappeared? How much of your personal life is recorded there?
Consider my own situation.
I joined Facebook in 2008, coming up on 8 years ago now, and have had countless meaningful interactions there with people I care about (let’s set aside all the less meaningful interactions for the moment).
In that time, I’ve been through maybe 6 or 7 smartphones. I’ve snapped thousands of photos, many of which I have no idea where to find at the moment*, but some of which I have uploaded to sites like Facebook, Twitter, and various iterations of what is now Google Photos.
Unlike in decades past, today we simply don’t “print” the photos we take (I can’t think of a good reason why I would, frankly), but this means that we also don’t give much consideration to what happens to those photos—not to mention our personal interactions and communications, and even stuff we upload to the web or social networks—after the fact.
I don’t purport to have all the answers. In fact, my purposes in writing this post today are more around sparking some thought rather than speaking to specific solutions, which almost certainly will vary from person to person.
But if you treat your social media profiles like a de facto backup of some of your most treasured photos (like I have), and you’ve had meaningful interactions with others on social networks (like I have), then an important question needs to be raised:
What would you lose if one or more of these sites were to shut down?
This week, I spent a fair amount of time getting better acquainted with some of the principles established by the #Indieweb community. This is a group of people committed to the creation and viability of the “open web.”
The terminology around the “open web” is used to draw a distinction between the web that can and should be created and used by individuals, as opposed to the “corporate web,” which is centered around commercially driven services.
One of the goals of the movement is to keep the web open and free. This doesn’t exclude the usage of paid services—on the contrary, it’s clear that even users of the open web will need to pay for services like domain registration and web hosting (although there are, as I discovered this week, more free options for those items than I would’ve guessed).
In fact, the distinction between the “free and open” web and the “corporate” web isn’t so much one of payment, but rather of ownership, access to, and control over one’s own data.
To illustrate this, IndieWebCamp, one of the groups central to the #IndieWeb movement, maintains a list of “site deaths,” which are often free (but not always) services for users to write blogs and upload/store/share photos, among other things, but which have famously shut down over the years. Often, this leaves users with little or no opportunity to download the data they’ve stored on these services.
Examples? When Geocities shut down in 2009, something like 23 million pages disappeared from the web. Previously, AOL killed off AOL Hometown, removing more than 14 million sites from the web. Google has killed off a number of products, including Google Buzz, Google Reader (which personally affected me), Google Wave, and countless others.
In many cases, users had even paid for the services, but due to a variety of factors, such as:
lack of profitability
changes in ownership
shifts in direction, and even
loss of interest on the part of the owner(s)
…the services get shut down anyway.
There are a couple of tragic ramifications of these site deaths.
One is that often the people most harmed are the ones least knowledgeable about setting up and maintaining their own web presence.
Often the appeal of a free or inexpensive blogging platform (for example) is that one doesn’t need to gain any real know-how in order to use it.
While that’s great in terms of getting people to get started publishing on the web or otherwise using the web (which I’m certainly in favor of), it has often ultimately sucker-punched them by never creating an incentive (until it’s too late, of course) to gain the minimal amount of knowledge and experience they would need to maintain something for themselves.
Even when the users are given the opportunity to download their data, which is not always the case, these are the very people least likely to know how to make use of what they’ve downloaded.
Another tragic loss is for the web community at large. When a service of any significant size shuts down, often this results in the loss of tremendous amounts of information. Vanishing URLs means broken links throughout the parts of the web that remain, which makes the web less useful and more costly to maintain for us all.
Some of what is lost is of more value to the individuals that originally uploaded or published it than to the rest of us, of course. But even personal diaries and blogs that are not widely read contribute to our large-scale understanding of the zeitgeist of the times in which they were created, and that is something that could be preserved, and for which there is value to us from a societal perspective.
Geocities, as an example, has accurately been described as a veritable time capsule of the web as it was in the mid-1990s.
Maintaining Our Freedoms
At the risk of being accused of philosophizing here, I’d like to step away from the pragmatic considerations around the risk of losing content we’ve uploaded, and look for a moment at a more fundamental risk of loss: our freedom of speech.
The more we concentrate our online speech in “silos” controlled by others, the more risk we face that our freedoms will be suppressed.
It’s a simple truth that centralization tends toward control.
Consider this: according to Time, as of mid-2015 American Facebook users were spending nearly 40 minutes per day on the site.
A study published in April 2015 by a team of researchers found that the majority of Facebook users were not aware that their news feed was being filtered and controlled by Facebook. (More on this here.)
As a marketer, I’ve understood for many years that as a practical consideration, Facebook must have an algorithm in order to provide users with a decent experience.
But the question is, would Facebook ever intentionally manipulate that experience in order to engineer a particular outcome?
So… we’re spending an enormous amount of our time in an environment where most of the participants are unaware that what they see has been engineered for them. Furthermore, the audience for the content they post to the site is also then being manipulated.
Let me emphasize that it’s clear (to me, at least) that Facebook has to use an algorithm in order to provide the experience to their users that keeps them coming back every day. Most users don’t realize that a real-time feed of all the content published by the other Facebook users they’ve friended and followed, combined with content published by Pages they’ve liked, would actually be unenjoyable, if not entirely unusable.
But the logical consequence of this is that a single point of control has been created. Whether for good or for ill—or for completely benign purposes—control over who sees what we post exists. Furthermore, anyone is at risk of having their account shut down for violating (knowingly or unknowingly, intentionally or otherwise) a constantly changing, complex terms of service.
So… even if you aren’t concerned about a service like Facebook shutting down, there remains the distinct possibility that you risk losing the content you’ve shared there anyway.
In other words, someone else controls—and may, in fact, own—what you’ve posted online.
What Can We Do?
All of this has strengthened my resolve to be committed to the practice of owning and maintaining my own data. It isn’t that I won’t use any commercial services or even the “silos” (like Facebook and Twitter) that are used by larger numbers of people, it’s just that I’m going to make an intentional effort to—where possible—use the principles adopted by the IndieWeb community and others in order to make sure that I create and maintain my own copies of the content I create and upload.
There are 2 principal means of carrying out this effort. One is POSSE: Publish on your Own Site, Syndicate Everywhere (or Elsewhere). This means that I’ll use platforms like Known in order to create content like Tweets and Facebook statuses, as often as practical, and then allow the content to be syndicated from there to Twitter and Facebook. I began tinkering with Known more than a year ago on the site social.thedavidjohnson.com.
As an example, here is a tweet I published recently about this very topic:
Spending some time this week getting better acquainted with the #indiewebcamp community. Lots to learn!
While it looks like any other tweet, the content actually originated here, where my personal archive of the content and the interactions is being permanently maintained. This works for Facebook, as well.
I’m making the decision now to gradually shift the bulk of my publishing on social networks to that site, which will mean sacrificing some convenience, as I’ll have to phase out some tools that I currently use to help me maintain a steady stream of tweets.
The payoff is that I’ll have my own permanent archive of my content.
In the event that I’m not able to find suitable ways to POSSE, I will begin to utilize the PESOS model: Publish Elsewhere, Syndicate to your Own Site.
Since some of the silos that I use don’t permit federation or syndication from other platforms, I’ll be pulling that content from the silo(s) in question back to my own site. An example is Instagram, for which inbound federation is currently difficult, but for which outbound syndication (back to my own site) is achievable.
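The heart of PESOS is just “fetch the post from the silo, save a copy you control.” Here’s a minimal sketch of the archiving half in Python. The shape of the `post` dict is entirely hypothetical—it stands in for whatever JSON a silo’s API or export tool hands back—so treat the field names as assumptions, not Instagram’s actual schema:

```python
import json
from datetime import datetime
from pathlib import Path

def archive_post(post, archive_dir):
    """Write one silo post to a dated file in an archive I own.

    `post` is assumed to be a dict like the JSON a silo's export or API
    might return -- the exact field names here are made up for illustration.
    """
    ts = datetime.fromisoformat(post["created_at"])
    # Organize the archive by date, one file per post ID
    path = Path(archive_dir) / ts.strftime("%Y-%m-%d") / f"{post['id']}.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(post, indent=2))
    return path

# Demo with made-up data standing in for a silo API response:
post = {
    "id": "abc123",
    "created_at": "2016-01-15T09:30:00",
    "caption": "Sunset over Sarasota Bay",
    "media_url": "https://example.com/p.jpg",
}
saved = archive_post(post, "my-site-archive")
```

A real PESOS setup would wrap this in a scheduled job that polls the silo for new posts, but the point stands: once the copy lands on your own disk (or your own site), a site death can’t take it with it.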
Not as Hard as it Sounds
I am, admittedly, a geek. This makes me a bit more technically savvy than some people.
But… the truth of the matter is that this really isn’t hard to set up. The IndieWebCamp website provides an enormous wealth of information to help you get started using the principles of the IndieWeb community.
And it can begin with something as simple as grabbing a personal domain name and setting up a simple WordPress site, where if you use the self-hosted version I’ve linked to, you’ll have the ability to publish and syndicate your content using some simple plugins. Alternatively, you could use Known, which has POSSE capabilities (and many others) baked right in.
There are loads of resources on the web to help you take steps toward owning and controlling your own data.
Note: For those who live in or around Sarasota, if there’s enough interest, I’d be open to starting a local group (perhaps something of a Homebrew Website Club), to help facilitate getting people started on this journey. Respond in the comments below or hit me up on Twitter if you’re interested.
Personal Note of Gratitude
I’m indebted to a long series of leaders who have worked to create the open web and have personally influenced me over a number of years to get to where I am today in my thinking. There are many, but I’d like to personally thank a few who have had a greater direct impact on me personally. They are:
Matt Mullenweg, co-founder of WordPress. Matt helped me understand the important role of open source software, and although he didn’t invent the phrase, he personally (through his writings) introduced me to the idea of “free as in speech, not free as in beer.”
Kevin Marks, advocate for the open web whose tech career includes many of the giants (e.g. Google, Apple, Salesforce, and more). Kevin understands the technology and the ethical and societal implications of factors affecting the open web, and has taken on the responsibility of serving as a leader in many ways, including in the IndieWeb community.
Ben Werdmuller, co-founder of Known. Ben and his co-founder, Erin Jo Richey, have also stepped up as leaders, not only creating technology, but endeavoring to live out the principles of the open web.
Leo Laporte, founder of TWiT. As a broadcaster, podcaster, and tech journalist, Leo was instrumental in introducing me to people like Kevin Marks and Ben Werdmuller by creating and providing a platform for concepts like these to be discussed.
As I said, there are plenty more I could mention. In today’s world of the internet, we all owe an incredible debt of gratitude to many who have worked tirelessly and often selflessly to create one of the greatest platforms for free speech in all of history. Their legacy is invaluable, but is now entrusted to us.
Let’s not screw it up.
*I’ve got most of them. They’re stored on a series of hard drives and are largely uncatalogued and cumbersome to access. Obviously, I need to do something about that.
Please be aware that if your spoken words include personal or other sensitive information, that information will be among the data captured and transmitted to a third party through your use of Voice Recognition.
For the record, I don’t own a Samsung Smart TV. And this sentence doesn’t say anything that any of us wouldn’t have guessed… had we thought about it.
But… how many devices do we own today that are listening all the time? And exactly how much of what we say is being recorded and sent to 3rd parties for “voice recognition?”1
I can think of a handful of other devices which are actively listening all the time and are often found in our homes (like the Xbox One / Kinect) or even on our persons (e.g. Google Now on Android — “OK Google” anyone?) and in newer automobiles.
Unnecessary Cause for Alarm?
I would imagine that the bulk of information being transmitted out of our living rooms via Samsung TVs is largely uninteresting to anyone.
But what are the policies that govern the storage (long term or short term) of this data? How sophisticated are the tools that interpret speech? Are transcripts of this speech stored separately or together with audio recordings?
What government agencies have or will gain access to either audio recordings or speech transcripts?
Perhaps the data doesn’t get stored by anyone for any longer than it takes to decide if you’ve issued a command to your device. And maybe there is no reason to even question what happens to all of the information scooped up by these listening devices.
I don’t want to sound like a conspiratorial alarmist. But on the other hand, maybe keeping some tinfoil close by isn’t such a bad idea…
1Geek moment: “voice recognition” is likely a misnomer. It is quite commonly and quite incorrectly used to refer to technologies that recognize speech. True “voice” recognition is a much different technology than “speech” recognition, and involves identifying who the speaker is rather than what he or she is saying. If Samsung or its 3rd-party vendor does have “voice” recognition, that’s a completely different cause for alarm.
This “big little city” that we call home and affectionately refer to as “Paradise” has been recognized by Google for having the “strongest online business community” in the State of Florida.
The award represents Google’s belief that businesses in Sarasota are embracing new technology to find and connect with customers.
Google uses its own data, including Search, ad revenue (both fees paid to Google by advertisers and fees paid by Google to publishers), and Ad Grants (provided by Google to non-profits) to estimate the economic impact of Google on each area. This forms the basis of its determination that local businesses are embracing technology.
The Herald-Tribune apparently also reported on the award, but their absurd paywall prevents us from accessing the article, so we won’t bother to link to it.
Congratulations to all of our local businesses who have endeavored to build out a presence online, use social media and other tools, and effectively generate a return on investment with digital advertising tools.
TL;DR: The RAM wasn’t seated properly. Like… for a long time. And I’m a geek who should know better. Try reseating your RAM.
I’ve had this machine for almost 3 years (in itself a record, but that’s another blog post). It was a nice middle-of-the-road machine that I bought after an uncharacteristically brief research period. Suffice it to say that I wasn’t expecting it to last long, since I am (at times) a bit of a road warrior and it was purchased to be my daily driver.
I realized it was slowing down some about a year after purchasing it. So, I did the obvious and bought new RAM for it. In fact, I doubled the RAM that day… or at least that was the intention. I happily removed the factory-installed 2GB memory sticks and popped in fresh 4GB ones.
Imagine my horror when, on boot, Windows reported 4GB of RAM.
What?! There must be some mistake.
I shut the machine down, re-seated the new RAM (and verified that I had, in fact, put the new sticks in). Rebooted. 4GB.
One of the new sticks must be bad, I thought.
So… I swapped them. Still 4GB. So… it isn’t the sticks. Must be one of the slots.
So… I booted up with a stick in only one of the 2 slots. Machine worked. 4GB.
With a stick in only the 2nd slot, the machine never came up.
Just to be sure, I put the original RAM in. Booted up with what should’ve been 4GB (2 x 2GB sticks). BIOS and Windows both reported only 2GB.
Shoot. The 2nd slot is dead. No wonder it’s been running slow!!
I contacted Gateway, since I was just inside the warranty period. After explaining my predicament, they authorized an RMA. All I had to do was ship the machine in.
That was 2 years ago. I didn’t have time then, nor have I had it since, to be without my daily driver for the time it would take them to fix it up and ship it back.
So… I decided—more through inaction than anything else—to live with it. And it really hasn’t been too bad, frankly.
A few months ago, I decided that an SSD upgrade would be a nice boost, and that did wonders for the machine’s performance. In fact, it was so nice that it made me think I might be able to hang on to this machine for maybe even a whole year more!
But for the last few months… I’ve started to really bump up against the upper limits of this thing’s performance. Maybe it’s my habit of having too many Chrome tabs open… or maybe everything just uses more resources now… but with 2 screens full of Google Chrome and one of the Adobe products (usually Photoshop) running, I’d find that my physical memory usage was at 99%. Even worse: I started getting warnings about low memory.
So… today, on a whim, I decided to open the case and just try to fix it.
I could never understand why on earth there was no physical sign of difficulty. The slots both appear to be soldered nicely to the motherboard. There’s no hint of cracking on the motherboard itself, nor on the physical structures that make up the slots. The machine has undergone no trauma of which I’m aware… unlike the machine before this, which I managed to run over with my convertible one day.
So… I went through the gamut of tests all over again. All this time, I’ve had a RAM stick sitting in the “dead” slot not doing anything. It never seemed like there was a good reason to remove it, so I left it.
After doing some tests… even flashing the latest BIOS from the manufacturer, I was unsuccessful and not really getting anywhere. So… I ran some Google searches about dead memory slots. I even ran across one post that showed a nifty memory slot fix involving a guitar pick. It just so happened that I had a guitar pick handy, but that didn’t help.
Now… let me just say that for the last 21 years, my daily work has revolved around technology. For large chunks of that time, fixing technology was even a major component of my life. I do my own IT support, and always have. In fact, right or wrong, I handle all of our own internal IT needs.
…which is why I feel really stupid saying what I’m about to say.
I don’t honestly know which board I was reading (I’ve gone back to look at the pages I visited today while trying to solve this, and I haven’t found it)… but some joker in a thread about dead memory slots actually made a remark that went something like this:
Any chances you seated the RAM incorrectly 3 times in a row? I’ve done it.
I didn’t think too much about it at the time… probably due to my vast IT experience. But as I continued tinkering, it started to haunt me.
Wouldn’t you know it?
I opened everything back up, looked at slot number 2, and realized the memory stick wasn’t seated.
Could it be that simple? Have I done without the full capacity of my hardware for 2 years over a failure to seat a memory stick properly?
I’m typing this on my newly responsive machine with 8GB of RAM.
Since none of us use cash anymore (except for that one guy in accounting), often your PIN code is the only thing standing between a would-be thief and the piles of treasure you have stashed in your checking account.
Actually, the card plus PIN is a reasonably good, if simple, implementation of the “something you have” plus “something you know” principle of security. Neither the card nor the PIN is much good without the other. (We’re ignoring for the moment the fact that most debit cards can also be processed as credit cards.)
Obviously, hanging on to the card itself is a good start, so that covers the “something you have” side of the equation. But sleight of hand, accidental drops, and old-fashioned purse-snatching still happen today.
So that leaves us with the “something you know” piece: your PIN.
Why Be Concerned About Infrared PIN Theft?
Being a security-minded person, I’m sure you’re already in the habit of covering your fingers when entering your PIN. After all, it takes only a tiny bit of effort, and it prevents cameras and sneaky eyes from catching what you’re entering, right?
But what about heat?
You did know your fingers transferred heat to those keys, right?
And since heat dissipates at a predictable rate (roughly exponential decay, per Newton’s law of cooling), the heat signature reveals not just which keys got pressed, but also the order in which they were pressed!
But that’s not really a problem, right? After all, who has equipment that can detect heat?
Until recently, the ability to walk up to a PIN pad and detect which buttons had just been pressed required an expensive (and bulky!) infrared camera that would pick up the heat signature left by your fingers.
But with the advent of relatively inexpensive ($349) iPhone attachments, infrared smartphone camera technology is easily within reach of a ne’er-do-well… especially since they might recoup that much or more in just one ATM transaction. But even for one who’s looking for something less expensive (or who uses an Android device instead of an iPhone), there’s this Kickstarter project, or even a tutorial on how to build one with an old floppy disk! (…for the MacGyver types, evidently).
In other words: stealing your PIN even up to 1 minute after you enter it is pretty easy these days.
So What’s the Solution?
It’s pretty simple, really. Just touch your fingers to several buttons and hold them there while you’re entering your PIN.
Heat multiple buttons up, obfuscate the ones you pressed.
Not so sure about all of this? Mark Rober made this video to demonstrate:
Oh yeah… and don’t use PINs that are easy to guess!
One of my great disappointments in life came several years ago when I made the switch to a 64-bit OS for the first time: a 64-bit build of Google Chrome simply did not exist!
OK, I might be exaggerating my disappointment. But only slightly.
But life went on. After a while, my incessant checking for news on this all-important development slowed from daily… to weekly… to… I can’t even remember when I last looked.
And to be honest, I haven’t cared. 32-bit Chrome has been fine… until the last couple of months. I’ve noticed it has begun to consume more and more of my aging laptop’s finite memory. This could, of course, have something to do with the sheer number of tabs and background apps (running in Chrome) that I have open. But that’s beside the point.
Your browser is, after all, likely to be your single most-used piece of software—especially if (like me) you long ago ditched other email clients.
But it was late when I saw it, so I waited till this morning to install it.
The upgrade process to 64-bit Google Chrome was fairly simple, but one step left me questioning whether it had worked, so…
How to Upgrade to the 64-Bit Version of Chrome
There’s currently no upgrade path within Chrome itself to get you over to the 64-bit development channel—making the switch is a manual opt-in process. Here’s how to do it:
Head on over to the official Chrome download page and look for the line that says “You can also download Chrome for Windows 64-bit.” Click the bold words “Windows 64-bit,” which will switch things around so that when you hit the big blue “Download Chrome” button, you’ll get the one you want. Currently, you’re out of luck if you’re a Mac user. (Linux users have had access to 64-bit Chromium for a while now.)
Optional step: At this point, I bookmarked all my open tabs just in case they got lost during the upgrade process. I wasn’t sure how this was gonna go down… so, I’d rather be safe than sorry. I then closed Chrome.
Double-click the Chrome-Setup.exe file that you just downloaded and let it run. This ran and completed, leaving me wondering what the heck had happened. Did it update my Chrome shortcuts in the Start Menu, Taskbar, and Desktop? I didn’t know. Would I still be launching the 32-bit version if I clicked one? I didn’t know!
Launch Chrome again. If your experience is like mine, all your tabs will reopen and everything will go back to the way it was. Hmmmmm….
Head over to your hamburger menu and click the “About Google Chrome” item (or just open a tab and type chrome://chrome/ in the address bar). You’ll see a message that reads something pretty close to “Google Chrome is almost finished updating. Relaunch Chrome to complete the update.” (I didn’t screenshot it, but you’ll know it when you see it.) There’s a handy “Relaunch” button.
When Chrome restarts, check chrome://chrome/ again. You should see a shiny new version message like Version 37.0.2062.94 unknown-m (64-bit). The beauty is the “(64-bit)” at the end, of course.
So How Is It?
OK so it’s admittedly a bit early for real serious feedback here. But my preliminary thoughts are pretty solid.
So far, I can’t tell that it’s making any better use of memory (this is one of its promised benefits thanks to the availability of better addressing). But, it’s nice and zippy. The memory usage may not have actually been the real problem I’d been experiencing with the 32-bit version. We’ll see.
Fonts are visibly better. For whatever reason, Chrome has been really bad with font rendering… so much so that I almost made the switch to Firefox over it! This has made me happy.
I’ve had no problems with any of my extensions. I wasn’t expecting any, but the announcement post on the Chromium blog and the Ars story both mentioned lack of support for 32-bit NPAPI plugins. This means you may need to update Silverlight and Java. (I haven’t tried Netflix yet, but I don’t use it on my computer very often anyway. We’ll see what happens.)
All in all… so far, so good. I’ll plan on a more thorough write-up after I’ve had some time behind the wheel. But for now… I’d say go for it!
There was no way to log in to the support site to post my issue… because I couldn’t log in!
So… after I responded via Twitter to that effect, I was pleasantly surprised to receive an email from Amazon support. I’m still not 100% certain how they figured it out, but they managed to locate my account and my email address just from my tweets. (I actually think I got doxxed by Amazon Support, but that’s OK!)
Ultimately, they called me as well, because I wasn’t able to get logged in to reply to the case ID that had been established for me.
As Amazon continued to work on their end, I also engaged in some troubleshooting on my own:
Tried to sign in from another browser. I normally use Chrome as my daily driver, but I tried to log in from Firefox.
Tried to sign in using Firefox “Private Window” to eliminate the browser cache and any cookies that might be affecting sign-in.
I actually busted out Internet Explorer (cringe!). Since this is a fairly recent install of Windows 7, I knew that I had never logged in to an Amazon account from IE, so that also gave me a fair test without the normal Amazon cookies and browser cache.
Used my wife’s laptop to try to sign in.
In every case, I received the exact same error.
One of the messages I received from AWS Support suggested that I attempt to log in to another account. Although my AWS setup is all connected to my primary Amazon account, I did have another account or 2 that I could try. I was successful in logging in right away using the first account I tried.
So…. I was able to conclude that the issue was directly connected to my Amazon account and didn’t have anything to do with my browser, cookies, or cache.
Since Amazon uses an OAuth process to facilitate single-sign-on to multiple Amazon properties via a single account, I thought, “I’ll just sign in and review my Amazon order history.”
No dice. Same error.
Ultimately, after Nolan from Amazon Web Services Support got me on the phone, I walked him through all that I had done. He told me how puzzled they were on their end, since everything in my account looked OK.
He first directed me to try to log in from a couple of specific locations, just to rule out user error (I’m guessing).
After some effort, he asked if I would click the “Forgot Password” link.
“Hmmm…. why didn’t I think of that?”
I guess it hadn’t occurred to me because I was too busy ruling out all sorts of other issues.
So, I used the password reset function and created a new password. That’s when I did see a warning from Amazon’s site about cookies. I wish I had noted the actual error (or taken a screenshot of it), but I didn’t. The message seemed to indicate that my browser wasn’t accepting cookies (I was back in Chrome by then, and I knew it was accepting cookies).
At this point, I decided to go ahead and remove all Amazon cookies from Chrome. Once I did, I was able to login.
Thank you, Amazon support! Thank you, Nolan!
Apparently there was some corruption in my Amazon account on Amazon’s servers. I believe this because I saw the same error from every browser I tried—even from multiple machines. Evidently, resetting my password cleared the issue!
Bottom line: If you see this error, reset your password. You may also need to remove Amazon cookies from your browser.
P.S. I was very impressed with the security procedures at Amazon. In every communication (I also tried to get support via chat), they took multiple steps to confirm my identity before proceeding. Kudos to them for establishing solid procedures for this!
Yesterday, I logged in via FTP to a separate hosting account on a completely different web host, and found some of the same signs that accompanied the original attack on my 1and1 account.
The first sign is a suspicious file in the root of the website. The filename is “.. ” — as in ‘dot dot space’
This is particularly insidious, because the filename is designed to be hard to find: “..” by itself is the Unix/Linux convention for “parent directory.” (It works the same way on Windows & DOS systems as well.)
Thus, if you aren’t paying attention and looking specifically for it, it’s hard to notice. Also, since most systems don’t give you any sign of the “space” in the filename, it’s hard to open the file. (Here’s where I have to give credit to a sysadmin at 1and1 for helping me discover the space in the filename. I kept telling him it was called “..” and he said, “that’s impossible.” He was right.)
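If you happen to have shell access to the host (an assumption—many shared hosts do offer SSH alongside FTP), a couple of standard commands will expose the trailing space that FTP clients hide. This is a generic sketch using GNU tools, not something from the attack itself:

```shell
# List all entries, quoting each name so that a trailing space
# becomes visible inside the quotes (GNU ls).
ls -la --quoting-style=shell-always

# Or search the current directory for any filename ending in a space:
find . -maxdepth 1 -name '* '
```

The `find` variant is handy because it catches any space-suffixed name, not just “.. ”.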
Either way, I have found that you can simply rename the file and then download it via FTP to open it up and see what’s inside. Here’s the code inside the “.. ” file:
This is obfuscated somehow… perhaps encoded with base64 or some other method.
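I can’t decode the payload here, but a quick way to flag suspect files is to grep for the PHP functions that obfuscated malware typically leans on. This is a generic detection sketch (the function list is my assumption, not taken from this specific attack):

```shell
# Recursively list PHP files that call common decoding/obfuscation functions
grep -rlE 'eval\(|base64_decode\(|gzinflate\(|str_rot13\(' --include='*.php' .
```

Expect some false positives—a few legitimate plugins use these functions too—so treat the output as a list of files to inspect, not to delete.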
I’m not certain what it does, but my guess is that it only works when in combination with the code that was inserted into PHP files. Here are the filenames targeted by the attack:
While index.php & header.php are common filenames across a wide variety of PHP websites, wp-config.php is unique to WordPress. Thus, I’m fairly certain the creators of this attack were specifically interested in attacking WordPress sites.
The wp-config.php file only shows up in the “root” folder of any given WordPress installation. On the other hand, index.php appears in a number of folders in a typical WordPress installation. Here are a few examples:
the “root” folder of the site
the wp-admin folder
the main folder of any given theme
the main folder of some plugins
The header.php file, on the other hand, is most likely to show up in one or more of your theme folders.
My guess is that whatever script gets uploaded to your server gets busy locating files that match those filenames and injecting the malicious code.
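If that guess is right, the same file search the attack script presumably performs is easy to reproduce yourself—and it doubles as a checklist of files to inspect for the injected code:

```shell
# List every copy of the three targeted filenames under the current directory
find . -type f \( -name 'index.php' -o -name 'header.php' -o -name 'wp-config.php' \)
```

On a typical WordPress install this turns up dozens of index.php files (many are intentionally empty placeholders), which shows just how many injection points the attacker has to work with.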
The code is intended to be hard to spot. First of all, the PHP files are edited without modifying their timestamps. Thus, they don’t look like they’ve been edited recently.
Also, the code begins with an opening <?php tag immediately followed by 1183 spaces. This means that even if you open an infected file in a typical code or text editor, the malicious code sits so far off the right edge of your screen that you won’t notice it. You can scroll down and see all of the untouched PHP code you’re expecting to find in the file.
Having been attacked in the past, I was already aware of both of those techniques, so I opened the files, scrolled all the way to the right, and found the code.
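That space-padding trick is also easy to scan for: a legitimate file almost never follows its opening tag with hundreds of spaces, so a regex looking for a long run of spaces after <?php flags the infected copies. A detection sketch (my own heuristic, with an arbitrary threshold of 100 spaces):

```shell
# Flag PHP files where the opening tag is followed by 100+ consecutive spaces
grep -rlE '<\?php {100,}' --include='*.php' .
```

Since the timestamps are left untouched, a content scan like this is more reliable than sorting files by modification date.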
Here’s an exact copy of what’s being inserted into these files.
What Does This Code Do?
Well… the only reference to this particular attack that I’ve been able to find online is this thread (in German). It confirmed a suspicion I’d held: something was inserting ad code into the WordPress admin pages (the “Dashboard,” specifically) of my sites. The injected code is only visible when you’re logged in as an admin user, so it’s intentionally targeting WordPress site operators.
1and1 insisted that my sites were injecting malware into visitors’ browsers. Perhaps this is the malware. Perhaps the code was doing more than just displaying the ads I saw.
In any case, I had originally attributed these ads to a recently-added Chrome extension, which I immediately disabled.
Now that I’ve seen the German thread, I’m more convinced that the sites which were displaying that ad were, in fact, the ones infected with this malicious attack.
So… I have no proof as to what this code actually does. It’s all obfuscated, and figuring it out is above my pay grade anyway. My only hope is that by writing this up, someone (or perhaps more than one someone) will be able to use what I’ve discovered to make sense of it and put an end to this sort of crap.