Recently, I wrote about the book, Learn to Program With Minecraft, and shared my experience getting set up to use the book with Ubuntu instead of with Windows or Mac OSX.
Yesterday I learned that the author of that book, Craig Richardson, appeared on this week’s episode of Triangulation with Leo Laporte. It’s a fun episode… they set up Leo’s Mac to run a local Minecraft server, and test out a bunch of fun stuff from the book. Well worth the watch!
Update 3/20/2016: Thanks to one of our readers, Fabrizio Fazzino, for pointing out that a software update since these instructions were prepared makes it necessary to modify them. Specifically, we’re changing how the Spigot Server component gets installed & used. I’ve updated the instructions below accordingly.
Also, Fabrizio has prepared a more succinct set of instructions that summarizes the steps. If you’re not interested in learning as much about how and why this works, I’d recommend you check my “Quick Note About Folder Structure” (in the yellow box below) and then follow his instructions in this comment, which he was kind enough to post here since his original blog post is no longer accessible to the public.
Python is a programming language that I’ve long wanted to get acquainted with, and since my daughter loves Minecraft so much, I felt like this book would be an ideal way for her to gain some exposure to it as well.
The only problem? We each use the Ubuntu distribution of Linux instead of Windows or Mac OSX.
You wouldn’t think this would be a problem: Minecraft is built in Java, which runs without a problem on Ubuntu (and many other platforms). Python is readily available for Ubuntu. Seems like a no-brainer, right?
Well… not quite. After the Amazon box arrived, I spotted this note on the back cover of the book:
The code in this book will run on Windows 7 or later, OS X 10.10 or later, or the Raspberry Pi. (See the last page for detailed requirements.)
No problem! The Raspberry Pi runs a special distribution of Linux called “Raspbian,” which is a version of Debian Linux, which is the upstream version of Linux that Ubuntu is based on. In other words: Raspbian & Ubuntu are cousins.
It seems reasonable, then, that if you can get this stuff working on the Raspberry Pi, then a much more powerful laptop running Ubuntu should be great!
Even more encouraging, there’s a nifty footnote at the bottom of Page 1 of the Learn to Program With Minecraft book which reads:
Since the book had already been out for a few weeks, this note gave me hope that perhaps some instructions for setting up all the tools on Ubuntu might’ve already been added. Unfortunately, this is not the case (yet, anyway).
So… I decided to try to do it anyway. Since author Craig Richardson and the No Starch Press team had prepared download packages for the Mac & Windows platforms, I figured that at the very worst, there would be some clues in those packages that might help me get going.
Getting Minecraft & Python Set Up On Ubuntu
First, here is a simple set of requirements (as I understand them) for you to be able to use the instructions in the Learn to Program With Minecraft book on Ubuntu:
Minecraft – this is the game itself. If you don’t already have a license for the game, you’ll need to pick one up and install it. “Installing” Minecraft for Ubuntu is quite easy: simply download the .jar file from your Mojang account and launch it. We had done this long ago, so this step was already completed for us.
Python – This is the programming language you’re learning. More on finding & installing it in a moment.
Java – while you probably have a basic interpreter (the “runtime environment”) for Java already, you’ll need the Java Development Kit to run this next tool.
Spigot Server – This is Minecraft “server” software, which you can run on the same computer that Minecraft will run on. You need this because the Python connection to Minecraft relies on a server plugin that you can’t just run with your plain old Minecraft installation.
Minecraft Python API (py3minepi) – It turns out that this connection between Python and Minecraft was originally developed especially for the Raspberry Pi. The way I understand it, this tool is an API for Minecraft that works with Python. You need it.
Raspberry Juice – Some brave souls created Raspberry Juice as a way to run the Python/Minecraft connection on other platforms (not just the Raspberry Pi). When you follow the instructions in the book for Windows or Mac, this little gem is bundled in. But if you’re installing from scratch for Ubuntu, you’ll need to get it yourself. Not realizing this, I installed all the other tools and ran into a really nasty error that I couldn’t get around:
This error message was the part of the installation that was trickiest to resolve, but after a bit of digging, I was able to work it out.
The detailed instructions for each of these items follow below. The one note I’d like to insert here is this:
I’m using Ubuntu 14.04 LTS, so your installation steps may vary somewhat if you’re using a different Ubuntu version.
Installing Python 3
You actually need 3 separate items that fall under the Python 3 heading:
Python 3 (the programming language itself)
IDLE (the development environment for Python, a/k/a the place where you’ll type commands and write programs)
PIP (the “package manager” for Python). You’ll need this to install the Minecraft Python API later.
For packages that are developed for Ubuntu, I tend to prefer using the “Ubuntu Software Center” to install stuff like this.
The only “gotcha” with Python is that there are a number of software versions and tools and so forth. So… launch the Software Center and search “python3” (with no space).
You should see a listing that says something like, “Interactive high-level object-oriented language (default python3 version)”
That’s the one you want. Do yourself a favor and click on “more info” so you can check the box for “IDLE 3” while you’re at it.
Install those, then run a similar search for “python3-pip” so you can use the package manager.
Prefer the command line to the Software Center?
Here are the commands to get python installed if you’d rather do this than use the Software Center. You’ll need to open a terminal to run these:
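The following covers all three items described above; the package names are the ones I’d expect to find in Ubuntu 14.04’s repositories, though they may differ slightly on other versions:

```shell
# Install Python 3, the IDLE 3 editor, and pip for Python 3
sudo apt-get update
sudo apt-get install python3 idle3 python3-pip
```

Once these finish, you can confirm everything landed with python3 --version and pip3 --version.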
Installing Java

From the Ubuntu Software Center, just search “openjdk-7” and look for the “headless” option (this is lighter weight because it doesn’t include a GUI).
Or from the terminal:
sudo apt-get install openjdk-7-jre-headless
Installing Spigot Server
Update 3/20/2016: As I mentioned in the update at the top of this post, Spigot Server has released a new version: 1.9. Since the other components we’re using have not yet (as of today) been updated to accommodate this, you’ll need to make sure that you download Spigot 1.8.8 and use it even though it is not the most recent version available.
Spigot is one of the most popular Minecraft server options, and is a necessary component in order to get Python & Minecraft talking to each other.
Getting the server software up & running is a matter of compiling the latest version. This reference page from the Spigot wiki is the one I used, and it seems to stay up to date. However, since it contains the instructions for multiple platforms, I’ll endeavor to simplify here.
One item to install first that will make this simpler is git. You’re going to need a terminal window anyway, so I’d recommend going ahead and opening it now and running this command to install git:
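On Ubuntu, installing git is a one-liner from the standard repositories:

```shell
# Install git from the Ubuntu repositories
sudo apt-get install git
```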
To help make things easier on yourself, you might find it useful to use a somewhat similar folder structure to the one described in Learn to Program with Minecraft for the Windows & Mac users.
To accomplish this for myself, I opened the “Files” application and browsed to my “Documents” folder, then created a new folder called “MinecraftPython”, then inside that folder another called “MinecraftTools”.
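The BuildTools.jar download itself comes from the Spigot project’s Jenkins server. You can grab it with your browser from the Spigot wiki page linked above, or pull it straight into the new folder from the terminal. The URL below is the one the wiki pointed to as of this writing, so double-check it there before relying on it:

```shell
# Download BuildTools.jar directly into the MinecraftTools folder
cd ~/Documents/MinecraftPython/MinecraftTools
wget https://hub.spigotmc.org/jenkins/job/BuildTools/lastSuccessfulBuild/artifact/target/BuildTools.jar
```

If you fetch it this way, it’s already in the right place and you can skip the file-moving steps below.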
I recommend moving the BuildTools.jar file that you just downloaded into that “MinecraftTools” folder.
To do this, you have a few options:
You can drag and drop using 2 “Files” windows, or
you can cut & paste if you just have one of those windows open.
Otherwise, you can move the file from the command line in a Terminal window with something like: mv ./Downloads/BuildTools.jar ./Documents/MinecraftPython/MinecraftTools/BuildTools.jar. Of course, you’ll need to modify that command to suit your particular situation (if you’re using a different folder structure or starting from a different location in your Terminal window than I did, for example).
Once that’s done, from your Terminal window, you’ll need to change directories to the location of your BuildTools.jar file. Depending upon where you’re starting from, that might mean a command that looks something like: cd ./Documents/MinecraftPython/MinecraftTools.
Then you’ll want to execute these 2 commands:
git config --global --unset core.autocrlf
java -jar BuildTools.jar --rev 1.8.8

The --rev 1.8.8 flag makes sure you build version 1.8.8 of the Spigot Server component (necessary for now, since the other components haven’t yet been updated to accommodate Spigot 1.9).
This will get the Spigot Server built. In order to finish installing, it will be helpful to create a startup script. You can create one with gedit by running a command like gedit start.sh from the same terminal window:
The gedit text editor will open. Copy and paste this into the editor:
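The script contents aren’t reproduced here, but based on the notes that follow (the spigot-1.8.8.jar filename and the MaxPermSize directive for Java 7), a minimal start script looks something like this. The memory sizes are my own assumption and can be adjusted to suit your machine:

```shell
#!/bin/sh
# Launch the Spigot server (MaxPermSize applies to Java 7; Java 8 ignores it)
java -Xms512M -Xmx1024M -XX:MaxPermSize=128M -jar spigot-1.8.8.jar
```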
Note: the filename “spigot-1.8.8.jar” was the current filename as of this writing. Use that filename as is for now (until the other components are updated to accommodate newer versions of Spigot Server). Also, the Spigot instructions specifically note that the ‘MaxPermSize’ directive is no longer needed or supported in Java 8, but since I’m using Java 7, I left it in mine.
Save the file and close gedit.
Next, you’ll need to make the script executable. From the same terminal window, type:
chmod +x start.sh
Before you launch this file, you’ll need to accept the End User License Agreement. Locate the eula.txt file in your “MinecraftTools” folder and open it for editing. You can do this from a terminal window by typing gedit eula.txt. Or, from the “Files” application, you can right-click the eula.txt file and choose the option to edit it with gedit.
Before you change the line that reads eula=false to eula=true, you’ll want to review the Minecraft End User License Agreement and verify that you’re ready to agree to its terms. Once you are, changing the value to “true” and saving the file will allow you to launch the Spigot Server without a hiccup (assuming that it is installed correctly).
Starting Your Spigot Server
Once that’s completed, you can start the Spigot Server to ensure it’s working properly. You’ll use this same command to start the server each time you need to use it:
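Assuming the folder structure used throughout this post, starting the server looks like this:

```shell
# Run the startup script from the MinecraftTools folder
cd ~/Documents/MinecraftPython/MinecraftTools
./start.sh
```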
If all has gone according to plan, you should see the output of the server startup process in your terminal window. The Spigot Server will create a new Minecraft world as it launches, and once it’s up and running, you’ll see a > prompt with a flashing cursor next to it. You need to keep this window open.
Testing Your Spigot Server Installation
To test your server, launch Minecraft as usual.
Click “Multiplayer” and then choose “Add Server”
Give your new local server a name. The book recommends Minecraft Python World for it. In the “Address” box, type localhost. There’s a picture at the top of page 17 of the book which you can refer to as an example.
Quick note: if you’re using a typical Minecraft installation, then your Minecraft version will have updated by now to a version newer than the Spigot Server version. If so, you’ll need to edit your “Profile” and specify the Minecraft version to run so that it matches your Spigot Server version (1.8.8 if you’re following this writing exactly). Alternatively, you can create a new profile instead (this is what I chose to do) so that your main Minecraft profile continues to use the latest Minecraft version.
You can double-click the Minecraft Python World and check out your new world.
Note: The author’s downloads for Mac & Windows operating systems are pre-configured to be in Creative Mode. This world will be in Survival Mode instead. You can change this by editing the server.properties file in your “MinecraftTools” folder and changing the line that reads gamemode=0 to gamemode=1. You may also find that you need to change the line that reads force-gamemode=false to force-gamemode=true.
Play as long as you like, but before proceeding: you’ll want to stop the Spigot Server. In the window with the flashing cursor, simply type stop at the > prompt, and the Spigot Server will save everything and shut itself down.
Installing the Minecraft Python API
Next, you’ll need the Minecraft Python API. There’s a Github repository here:
I recommend just hitting the “Download Zip” button there. The file will be saved to your “Downloads” folder. You’ll want to extract the .zip file’s contents. You’ll end up with a folder called py3minepi-master, which we need to copy into the “Documents/MinecraftPython/MinecraftTools/” folder.
Once the folder has been relocated to the “MinecraftTools” folder, we need to run a command to install it. From your terminal window (assuming your current directory is still the “MinecraftTools” folder), type:
sudo pip3 install ./py3minepi-master
Installing Raspberry Juice
The last piece, I believe, is the actual Raspberry Juice plugin for Spigot. You can find it on the project’s home page:
Look for the “Recent Files” link on the right. As of this writing, the latest was RaspberryJuice v1.7 for 1.8.1. Follow the links, and eventually, you’ll end up with a .jar file.
This file needs to be copied into the “plugins” folder of your Spigot Server installation. If you’re following the directions here specifically, then you’ll find that folder at “/Documents/MinecraftPython/MinecraftTools/plugins”
Put the .jar file in that folder. Your Spigot Server will automatically find it the next time it starts up.
Time to Test!
If all has gone well, you should be ready for the “Getting to Know IDLE” section of the setup instructions on Page 20 of the book. If you’re able to successfully run the tests there, you’ve got everything set up correctly!
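For reference, the book’s first connection test (typed into IDLE’s shell with both Minecraft and the Spigot Server running) amounts to something like the following; the chat message text here is my own:

```python
# Connect to the local Spigot server via the Minecraft Python API
from mcpi.minecraft import Minecraft

mc = Minecraft.create()  # connects to localhost on port 4711 by default
mc.postToChat("Hello, Minecraft World!")  # the message appears in the game's chat
```

If the Raspberry Juice plugin isn’t installed, the Minecraft.create() call is where the “connection refused” error described below shows up.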
It was at this stage that I got that nasty error I mentioned earlier:
When I got the “connection refused” error, I did a bunch of searching, but nothing seemed to fit. Ultimately, I hunted down the port number that the “minecraft.py” script was trying to connect to (port 4711). This didn’t make any sense, because the Minecraft server software defaults to port 25565. Searching around for information about what might be running (or not running, in my case) on port 4711 was what yielded the information about the Minecraft Python API.
Thankfully, author Craig Richardson left some clues for us in the pre-packaged downloads he made available for Windows & OSX. On my Ubuntu system, I opened up the download he prepared for OSX (since OSX and Linux are more or less cousins, I figured it would be worth a shot) and found Raspberry Juice. It was perhaps the least obvious component of this whole setup.
So far, this setup has been working for me. I’m not 100% certain that I haven’t missed something, but if I have, then it doesn’t appear to be affecting things so far.
I hope you find this helpful! Let me know what you run into in the comments. I’ll do my best to help!
One of the serious considerations of our time is the need to store and have reasonably usable access to all the digital media we are creating.
How often do we snap a photo and upload straight from our mobile devices to services like Instagram and Facebook?
How easy is it, using the apps on our phones, to bang out a tweet or a status update?
But have you ever given any thought to what might happen if those sites disappeared? How much of your personal life is recorded there?
Consider my own situation.
I joined Facebook in 2008, coming up on 8 years ago now, and have had countless meaningful interactions there with people I care about (let’s set aside all the less meaningful interactions for the moment).
In that time, I’ve been through maybe 6 or 7 smartphones. I’ve snapped thousands of photos, many of which I have no idea where to find at the moment*, but some of which I have uploaded to sites like Facebook, Twitter, and various iterations of what is now Google Photos.
Unlike in decades past, today we simply don’t “print” the photos we take (I can’t think of a good reason why I would, frankly), but this means that we also don’t give much consideration to what happens to those photos—not to mention our personal interactions and communications, and even stuff we upload to the web or social networks—after the fact.
I don’t purport to have all the answers. In fact, my purposes in writing this post today are more around sparking some thought rather than speaking to specific solutions, which almost certainly will vary from person to person.
But if you treat your social media profiles like a de facto backup of some of your most treasured photos (like I have), and you’ve had meaningful interactions with others on social networks (like I have), then an important question needs to be raised:
What would you lose if one or more of these sites were to shut down?
This week, I spent a fair amount of time getting better acquainted with some of the principles established by the #Indieweb community. This is a group of people committed to the creation and viability of the “open web.”
The terminology around the “open web” is used to draw a distinction between the web that can and should be created and used by individuals, as opposed to the “corporate web,” which is centered around commercially driven services.
One of the goals of the movement is to keep the web open and free. This doesn’t exclude the usage of paid services—on the contrary, it’s clear that even users of the open web will need to pay for services like domain registration and web hosting (although there are, as I discovered this week, more free options for those items than I would’ve guessed).
In fact, the distinction between the “free and open” web and the “corporate” web isn’t so much one of payment, but rather of ownership, access to, and control over one’s own data.
To illustrate this, IndieWebCamp, one of the groups central to the #IndieWeb movement, maintains a list of “site deaths”: services (often, but not always, free) that let users write blogs and upload, store, and share photos, among other things, but which have famously shut down over the years. Often, this leaves users with little or no opportunity to download the data they’ve stored on these services.
Examples? When Geocities shut down in 2009, something like 23 million pages disappeared from the web. Previously, AOL killed off AOL Hometown, removing more than 14 million sites from the web. Google has killed off a number of products, including Google Buzz, Google Reader (which personally affected me), Google Wave, and countless others.
In many cases, users had even paid for the services, but due to a variety of factors, such as:
lack of profitability
changes in ownership
shifts in direction, and even
loss of interest on the part of the owner(s)
…the services get shut down anyway.
There are a couple of tragic ramifications of these site deaths.
One is that often the people most harmed are the ones least knowledgeable about setting up and maintaining their own web presence.
Often the appeal of a free or inexpensive blogging platform (for example) is that one doesn’t need to gain any real know-how in order to use it.
While that’s great in terms of getting people to get started publishing on the web or otherwise using the web (which I’m certainly in favor of), it has often ultimately sucker-punched them by never creating an incentive (until it’s too late, of course) to gain the minimal amount of knowledge and experience they would need to maintain something for themselves.
Even when the users are given the opportunity to download their data, which is not always the case, these are the very people least likely to know how to make use of what they’ve downloaded.
Another tragic loss is for the web community at large. When a service of any significant size shuts down, often this results in the loss of tremendous amounts of information. Vanishing URLs means broken links throughout the parts of the web that remain, which makes the web less useful and more costly to maintain for us all.
Some of what is lost is of more value to the individuals that originally uploaded or published it than to the rest of us, of course. But even personal diaries and blogs that are not widely read contribute to our large-scale understanding of the zeitgeist of the times in which they were created, and that is something that could be preserved, and for which there is value to us from a societal perspective.
Geocities, as an example, has accurately been described as a veritable time capsule of the web as it was in the mid-1990s.
Maintaining Our Freedoms
At the risk of being accused of philosophizing here, I’d like to step away from the pragmatic considerations around the risk of losing content we’ve uploaded, and look for a moment at a more fundamental risk of loss: our freedom of speech.
The more we concentrate our online speech in “silos” controlled by others, the more risk we face that our freedoms will be suppressed.
It’s a simple truth that centralization tends toward control.
Consider this: according to Time, as of mid-2015, American Facebook users were spending nearly 40 minutes per day on the site.
According to a study published in April 2015, a team of researchers found that the majority of Facebook users were not aware that their news feed was being filtered and controlled by Facebook. (More on this here.)
As a marketer, I’ve understood for many years that as a practical consideration, Facebook must have an algorithm in order to provide users with a decent experience.
But the question is, would Facebook ever intentionally manipulate that experience in order to engineer a particular outcome?
So… we’re spending an enormous amount of our time in an environment where most of the participants are unaware that what they see has been engineered for them. Furthermore, the audience for the content they post to the site is also then being manipulated.
Let me emphasize that it’s clear (to me, at least) that Facebook has to use an algorithm in order to provide the experience to their users that keeps them coming back every day. Most users don’t realize that a real-time feed of all the content published by the other Facebook users they’ve friended and followed, combined with content published by Pages they’ve liked, would actually be unenjoyable, if not entirely unusable.
But the logical consequence of this is that a single point of control has been created. Whether for good or for ill—or for completely benign purposes—control over who sees what we post exists. Furthermore, anyone is at risk of having their account shut down for violating (knowingly or unknowingly, intentionally or otherwise) a constantly-changing, complex terms of service.
So… even if you aren’t concerned about a service like Facebook shutting down, there remains the distinct possibility that you risk losing the content you’ve shared there anyway.
In other words, someone else controls—and may, in fact, own—what you’ve posted online.
What Can We Do?
All of this has strengthened my resolve to be committed to the practice of owning and maintaining my own data. It isn’t that I won’t use any commercial services or even the “silos” (like Facebook and Twitter) that are used by larger numbers of people, it’s just that I’m going to make an intentional effort to—where possible—use the principles adopted by the IndieWeb community and others in order to make sure that I create and maintain my own copies of the content I create and upload.
There are 2 principal means of carrying out this effort. One is POSSE: Publish on your Own Site, Syndicate Everywhere (or Elsewhere). This means that I’ll use platforms like Known in order to create content like Tweets and Facebook statuses, as often as practical, and then allow the content to be syndicated from there to Twitter and Facebook. I began tinkering with Known more than a year ago on the site social.thedavidjohnson.com.
As an example, here is a tweet I published recently about this very topic:
Spending some time this week getting better acquainted with the #indiewebcamp community. Lots to learn!
While it looks like any other tweet, the content actually originated here, where my personal archive of the content and the interactions is being permanently maintained. This works for Facebook, as well.
I’m making the decision now to gradually shift the bulk of my publishing on social networks to that site, which will mean sacrificing some convenience, as I’ll have to phase out some tools that I currently use to help me maintain a steady stream of tweets.
The payoff is that I’ll have my own permanent archive of my content.
In the event that I’m not able to find suitable ways to POSSE, I will begin to utilize the PESOS model: Publish Elsewhere, Syndicate to your Own Site.
Since some of the silos that I use don’t permit federation or syndication from other platforms, I’ll be pulling that content from the silo(s) in question back to my own site. An example is Instagram, for which inbound federation is currently difficult, but for which outbound syndication (back to my own site) is achievable.
Not as Hard as it Sounds
I am, admittedly, a geek. This makes me a bit more technically savvy than some people.
But… the truth of the matter is that this really isn’t hard to set up. The IndieWebCamp website provides an enormous wealth of information to help you get started using the principles of the IndieWeb community.
And it can begin with something as simple as grabbing a personal domain name and setting up a simple WordPress site, where if you use the self-hosted version I’ve linked to, you’ll have the ability to publish and syndicate your content using some simple plugins. Alternatively, you could use Known, which has POSSE capabilities (and many others) baked right in.
There are loads of resources on the web to help you take steps toward owning and controlling your own data.
Note: For those who live in or around Sarasota, if there’s enough interest, I’d be open to starting a local group (perhaps something of a Homebrew Website Club), to help facilitate getting people started on this journey. Respond in the comments below or hit me up on Twitter if you’re interested.
Personal Note of Gratitude
I’m indebted to a long series of leaders who have worked to create the open web and have personally influenced me over a number of years to get to where I am today in my thinking. There are many, but I’d like to personally thank a few who have had a greater direct impact on me personally. They are:
Matt Mullenweg, co-founder of WordPress. Matt helped me understand the important role of open source software, and although he didn’t invent the phrase, he personally (through his writings) introduced me to the idea of “free as in speech, not free as in beer.”
Kevin Marks, advocate for the open web whose tech career includes many of the giants (e.g. Google, Apple, Salesforce, and more). Kevin understands the technology and the ethical and societal implications of factors affecting the open web, and has taken on the responsibility of serving as a leader in many ways, including in the IndieWeb community.
Ben Werdmuller, co-founder of Known. Ben and his co-founder, Erin Jo Richey, have also stepped up as leaders, not only creating technology, but endeavoring to live out the principles of the open web.
Leo Laporte, founder of TWiT. As a broadcaster, podcaster, and tech journalist, Leo was instrumental in introducing me to people like Kevin Marks and Ben Werdmuller by creating and providing a platform for concepts like these to be discussed.
As I said, there are plenty more I could mention. In today’s world of the internet, we all owe an incredible debt of gratitude to many who have worked tirelessly and often selflessly to create one of the greatest platforms for free speech in all of history. Their legacy is invaluable, but is now entrusted to us.
Let’s not screw it up.
*I’ve got most of them. They’re stored on a series of hard drives and are largely uncatalogued and cumbersome to access. Obviously, I need to do something about that.
So… working as a consultant who does a great deal of writing, training, and research online, I sit at my desk a lot more than I’d like to admit.
I’ve had great advice from some phenomenal people in my life, especially my chiropractor, about how to sit (or really how not to sit), and thankfully, I’ve made some ergonomic adjustments.
But the fact remains that I sit far too much, I stand up far too little, and my posture as I work is still not too great.
[And as an aside: that’s most definitely not me in the picture accompanying this post. My desk is nowhere near that clean. And I’m usually not in a suit when I sit at it. But I chose the picture just because his posture will make my chiropractor physically ill.]
So, I was thrilled to learn that one of our clients, whom my wife & I actually consider a friend as well, has completed significant training and is now certified in the MELT Method. We’ve worked with her a great deal in the past, and benefited from her expertise in STOTT PILATES® and Applied Functional Science®, among other things. And given her propensity for extensive research, I was not surprised that she had hunted down the solution to her own chronic pain problems (hers were, alas, caused by overuse as opposed to underuse, which was more likely the cause of mine).
In any case, I think we’ll plan to attend some workshops and get better acquainted with MELT, but in our little “preview” session with her, Shannon gave us an understanding of some of the basics, and it was remarkable the degree to which we experienced results in just a few minutes. More to come on this one!
Note: Recently, I was assigned the task of writing about my favorite meal for a course I am taking. I was so hungry when I finished, I thought I would share.
My favorite meal consists of buffalo chicken wings with curly fries, carrot and celery sticks with ranch dressing, fried cheese cubes with cherry mustard sauce, and a nice cold beer of some imported variety.
If you’ve never tried this particular combination of healthy foods (and yes, I jest here about the healthfulness of this meal), you’re missing out on one of the most delectable sensations of taste to ever cross the human palate. In particular, this meal should be enjoyed at a fine establishment (hint: it’s a dive) called “Wings N Things” on Cortez Road West, en route from the city of Bradenton, Florida to the sleepy fishing village known as Cortez.
Note: Yes, there’s another Wings N Things location (on Tallevast at Lockwood Ridge). And yes, I also enjoy that location—especially for its convenience. However, as both locations are no longer under their original ownership, I feel like the Cortez location’s owners have generally stayed more consistent with the practices of the founder. To some, that’s a positive. To others, it’s a strike against it. If it were closer, I’d probably eat at the Cortez location more often, but in practice, I find myself at Tallevast more often.
The reason this restaurant, in particular, should be chosen is that it serves, in my not-so-humble opinion, the quintessential flavor for the sauce that makes fried drumettes and wings qualify for the moniker, “buffalo.” While it’s available in milder forms (e.g. mild, medium, hot, and “TNT”), I recommend that you select the setting with the most “heat.” It’s a wonderful delicacy the founder of this restaurant named, “Napalm.”
Sandy, as she was known, must have been attempting to call to mind the burning sensation elicited by this bizarre substance used in chemical warfare (if that is indeed the correct term for it) as portions of the jungles of Southeast Asia were engulfed in flames during the Viet Nam conflict.
When the portions of fried yardbird are served to you as a patron, they appear on the table in a plastic boat lined with aluminum foil. Pooled in the bottom is a generous helping of this orange, aqueous substance, which has also been lavishly applied to the sticky exterior surfaces of the chicken pieces. Introduced into your mouth, the sauce ignites a veritable firestorm of flavors… simultaneously sweet, salty, vinegary, and — perhaps most importantly — hot. The heat comes from the particular combination of the peppers (mostly cayenne, but undoubtedly comprised of a selection of others which remain the secret of the proprietors) and the vinegar.
The effect is so remarkable that caution is advised when breathing the air above the meal, because the heat from the freshly fried meat causes the pepper-infused vinegar fumes to become nearly noxious. Coughing and sputtering are normal for those neophytes who fail to recognize this.
In order to be properly enjoyed, the curly fries — long cut and fried to a crisp — should be doused in white vinegar and then heavily salted. The flavor of this accoutrement perfectly complements that of the poultry.
The celery and carrot sticks add an air, slight as it may be, of healthfulness to the meal. The fact that vegetables are being consumed with this fried fiesta is just enough to salve the conscience of the eater. Dipping the sticks in the small plastic containers of ranch dressing helps round out the flavor profile of the meal.
As if the sensations crossing your taste buds weren’t yet salacious enough to tantalize, the deep-fried cubes of cheese are there to push everything beyond proper limits of enjoyability. Care must be taken to allow for the proper cooling of these little balls (one can only imagine that the cheese had been arranged in a cube shape before being breaded and deep fried) as the cheese — if it’s too hot — will explode into your mouth and sear the flesh thereof, properly ruining your ability to enjoy flavor for the rest of your meal. I must insist that at least some percentage of these little balls of dairy delicacies be dipped into the accompanying cherry mustard sauce. Having never located a similar sauce anywhere else, I can only speculate as to its origins. It doesn’t seem to be mustard-like or even cherry-like at all. Rather, it is a liquid with a mild reddish color that adds a nice spark of sweet flavor to the whole experience.
Of course, you may choose to wash all of this down with the beverage of your choice. For many years, this establishment served Pepsi products. Thus, a Mountain Dew was the imbibement of choice for those looking to add a non-alcoholic kick to the meal. Once the switch to Coca-Cola products was made, the only logical choice was an imported beer of some sort. I usually find the darkest option available, as I find it pairs best with the rest of the meal.
As my salivary glands are now working overtime just from the writing of this short essay, I feel compelled to submit my response and drive to this establishment post haste.
We’ve all known since that fateful Tuesday in 2001 that Sarasota had a connection to the events of the day we call 9/11. I’ve written previously about being held up by the Presidential motorcade as “W” made his way to Emma E. Booker Elementary school to read to the kids. Then there was the flight school in Venice where some of the “I don’t need to know how to land” hijackers trained.
Much later, we learned some bits and pieces about the Sarasota Saudis, and—perhaps the most concerning detail—that the FBI was playing cat and mouse about what it did and didn’t know.
On Monday of this week, though, a new set of documents emerged—documents that the FBI had previously failed to acknowledge even existed—that reveal even more bizarre details about the 9/11 Sarasota connection and what appears to be an FBI coverup.
Thanks to some extremely diligent efforts on the part of the Broward Bulldog and their ongoing investigative and legal actions, the FBI released the documents which, although heavily redacted, reveal information uncovered as far back as 2002.
An article published by the Broward Bulldog and picked up by the Miami Herald reveals the new details. They include a man dumping information into a dumpster behind a Bradenton storage facility, and a man who arrived in Sarasota, FL in November 2001 harboring apparent intentions to purchase land and establish a Muslim compound in Florida that was (and perhaps still is) feared to be a base for carrying out or facilitating terrorist activities.
Yesterday, my brother sent me a link to this video (posted below). It features some analysis and commentary on the Olympics that you might actually find startling.
What do you think? Are the Olympics in fact:
shameless exploitation of athletes?
a justification for child labor and even abuse?
an enormous boondoggle of corruption that lines the pockets of the well-connected and powerful?
an irrational exercise in tribalism?
Share your thoughts in the comments below.
Admittedly, I’m just becoming aware of Stefan Molyneux, the creator of this video content. So it would be too much to treat my posting of this video as an endorsement. But he’s incredibly thought-provoking, and perhaps impossible to ignore.
We all know we shouldn’t let an old WordPress site sit around without updating it. It’s dangerous, they say.
And… for the most part, I’m really good about staying on top of this—at least when it comes to mission-critical sites. But… I’ll admit, there are a few sites that I built and forgot about.
One in particular came to my attention this week. It was a site I built around a hobby of mine. It needed a WordPress upgrade.
Okay… it had missed a lot of WordPress upgrades.
But worst of all: it had a plugin that was very old and had stopped being updated by its original developer. It was a stats plugin that I really loved back in the days before Jetpack gave us access to WordPress.com stats.
That particular plugin had a vulnerability which was exploited by some nasty malicious hacker.
How I Found Out I’d Been Hacked
This particular site was in one of my longest-standing hosting accounts… one I’ve had since 2006 with 1and1.com. I keep telling myself I’m going to clean that account out and move all the sites, but I just haven’t done it. That’s part of the reason I’ve let some of the sites go unpatched—because why patch ’em if you’re gonna move ’em, right?
Well… somewhere along the line, 1and1 started the practice of sending an email when they encountered something suspicious going on. In the past, they’ve notified me when spam emails started going out because of the TimThumb WordPress vulnerability and when their antivirus scanner found malware in a PHP file.
I’ve always been quick to respond when I see one of those, and it happened just a few weeks back. In that case, it just turned out to be an old inaccessible file that I’d renamed after fixing a previous problem.
On Monday of this week, I got another one of these emails:
Anti-virus scan reports: Your 1&1 webspace is currently under attack [Ticket XXXXXX]
Even though I was busy, I jumped right in to see what was happening. They identified a file that had been uploaded to my webspace, and when I saw where it was located, I knew exactly what was going on. That old plugin was still running on the site I mentioned earlier.
So… I logged in via FTP, downloaded a copy of the “malicious file” just so I could see it, and then deleted it and the entire plugin that it got in through.
No big deal.
Or so I thought.
Yesterday, I discovered that all of the sites in that hosting account were down. For most of them, I was getting a simple “Access Denied” error from 1and1 when I tried to load them up in my browser.
A minor panic set in as I went in and tried to discover what was going on.
What I found was very perplexing. The file permissions on the index.php file, the wp-config.php file, and a handful of other files in these sites were changed to 200.
If you aren’t familiar with Linux file permissions, 200 means the owner can write to the file but nobody — not even the owner — can read it. So… if that file happens to be critical to the running of your site, then… your site doesn’t work.
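To make that octal notation concrete, here’s a small Python sketch (the helper name is mine, not anything from 1and1’s tooling) that checks whether a file has been stripped of all its read bits, the way a change from 644 to 200 does:

```python
import os
import stat

def is_unreadable(path):
    """True if no read bit (owner, group, or other) is set on the file.

    Mode 644 = owner read+write, group/other read.
    Mode 200 = owner write only — nobody can read the file.
    """
    mode = stat.S_IMODE(os.stat(path).st_mode)
    read_bits = stat.S_IRUSR | stat.S_IRGRP | stat.S_IROTH
    return (mode & read_bits) == 0
```

A file chmod’ed to `0o200` would return `True` here; restoring it to `0o644` would flip it back to `False`.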
So… I changed the permissions on a couple of these files in one of the most important sites just to try to get it working. Oddly… within a few minutes of me setting the permissions to 644, they were automatically changing back to 200.
“Hmmmmm…. maybe there’s some malware still running in my account,” I thought to myself.
That’s when I noticed a whole bunch of database “dump” files in the root of my webspace. They looked like this:
So… I replied to the email I’d gotten a few days earlier, and explained what was going on. This updated the “ticket” in 1and1’s Abuse Department so they could have a chance to respond.
After working on things for a few more minutes, I couldn’t stand it any longer. I dialed the 1and1 Support Department (something I truly hate to do) and waited. Within a short time, I was on the line with someone from India who had undergone a significant amount of accent reduction, and explained what was going on. When he was unable to find my ticket ID, I explained that I’d gotten an e-mail. He put 2 and 2 together and connected me with the Abuse Department.
Then… for the first time in the 8 years that I’ve had this account, I spoke to an American. I mean… fluent English. Clearly no foreign accent. And also for the first time, he knew something about what he was talking about!
He reviewed the ticket and was able to explain a little better what had occurred. Hackers had gotten in through unpatched software (which I knew) and had managed to execute shell commands with my account’s user privileges.
Within what must’ve been a very short period of time, they inserted malicious code into approximately 1,500 files in my webspace. This means that they infected even the WordPress sites that were all patched and running the latest versions.
All told, somewhere near 40 sites were infected.
1and1’s systems were automatically changing the file permissions for any infected files to 200 in order to keep anyone from accidentally downloading malware when visiting my sites.
So… then began the painstaking process of removing all the malicious code that had been inserted and bringing the sites back online one by one.
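With roughly 1,500 infected files across 40 sites, a small script beats clicking through an FTP client. This hypothetical Python sketch (my own helper, not anything 1and1 provides) walks a webspace and lists every file whose permission bits are exactly 200 — the mode the scanner applied to quarantined files — so each one can be inspected for injected code before its permissions are restored:

```python
import os
import stat

def find_unreadable_files(root):
    """Walk `root` and yield paths of files whose permission bits
    are exactly 0o200 (owner write only, no read bits set) — the
    mode applied here to files flagged as infected."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if stat.S_IMODE(os.stat(path).st_mode) == 0o200:
                yield path
```

Restoring the files wholesale with `chmod 644` would be a mistake, of course — the whole point of the 200 mode was to keep the injected code from being served, so each flagged file needs to be cleaned first.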
Late in the evening of Tuesday, December 17th, my Aunt Jane passed away. That moment represented the peaceful end to a valiant battle against cancer that she waged for the last few months of her life. Her daughter, Rachel, had arrived at her bedside just in time to be with her as she took her last breath… something for which I am very grateful.
Although we weren’t “close” for much of my life due to geographical distance, we had—for the last 5 years or so—spent quite a bit more time together thanks to her move to Florida. It was a great tragedy that precipitated her move here… the untimely loss of her husband, Jim, whom she greatly loved. They had made their home for the better part of their two children’s lives in Katy, Texas, just west of Houston. After Jim passed away in December of 2006, Jane felt drawn to Florida to be near to her parents, Rev. Jack Carroll and Erma Carroll, and to her sister (my Mom), Ann Johnson.
When she relocated to Florida, Jane brought with her my cousin Rachel, who was in her sophomore year of high school at the time. Jane’s eldest, my cousin Jay, was studying and playing football at Azusa Pacific University in the greater Los Angeles area, but we still managed to see him more often than ever during the holidays.
I’m very grateful for these past few years. We were all heartbroken at the loss of Jim… and very grateful for the family’s move to Florida. Countless times during Jane’s recent fight with cancer, whether we sat together at my parents’ house (when they were able to care for her there) or by her bedside in one of the three hospitals where she spent so much time, we remarked about how grateful we were that she moved her family to Florida.
Sadly, the time she spent here with her parents was not long. Within a year and a half of her arrival, we lost her father (my Grandfather). Fifteen months later, her mother (my Grandmother) passed away as well. A few short months later, my cousin, Rachel, left for college in Colorado, where she’s currently studying at the prestigious Colorado School of Mines.
Recently, I’ve thought a lot about that sequence of events… and although she didn’t “lose” her children, she did experience being distanced from them in the midst of losing her husband and her mother and father. I think that’s an awful lot of loss for someone to sustain, and my heart was heavy for her.
But… she was great fun to have around here in recent years. She was regularly to be found at Starbucks, where I would interrupt her coffee and reading when I had the chance. We always saw her at family gatherings for birthdays and holidays. She also went out of her way to invite my wife and daughter and me over to use the pool in her community, and we would often grab takeout and spend the evening with her afterward.
During her funeral service last Saturday at Toale Brothers in downtown Sarasota, Jane was remembered by all for her infectious laugh and for her feisty, vocal nature. She never was one to back down from any spirited debate, and it seemed that there were plenty to be had with her around! She was also fiercely loyal and genuine. She would certainly come to the defense of those she loved, and you really didn’t want to be on the wrong side of her when she did!
In addition to the family members who came to remember her and celebrate her life, some of Jane’s close friends from her neighborhood and from playing tennis were also there at her funeral. She managed to forge some really tight-knit friendships in her five years here in Florida.
I have fond memories of Jane from throughout my life. We briefly lived in the same part of Houston during my childhood, and I remember spending some time with her then. Not long after she met and married Jim, they moved to California. By then, my family had already moved away ourselves, so there were a few years there where we didn’t see them as often.
In addition to being spunky and stubborn (which I’m pretty sure came largely from her Mother), Jane was incredibly smart. This made her a force to be reckoned with. It was a badge of honor for me when I beat her (once) at Trivial Pursuit. I remember that moment like it was yesterday. I couldn’t have been more than 12 at the time, and I’m quite certain it was just because I got lucky with the cards that were drawn, but I was no less proud of it!
When her nose wasn’t in a book, Jane was constantly doing crossword puzzles. She loved music, and I remember my amazement at her CD collection when I was a kid.
Most significantly, though, Jane dove into raising Jay and Rachel with all of her might. She was very involved when they were in music and sports, and probably ruined the day of more than a few educators during her time as a Mom. It wasn’t a good idea to treat one of her kids unfairly, that was for sure!
I’m so proud of both Jay and Rachel today. They have each been shaped by the loss of their Father while in their teens. There’s no question that that was difficult—and I’m sure remains so to this day. But each has carried on and pursued and achieved great things since then. They’ve demonstrated great resolve and fortitude as they stood by their Mom during her battle with cancer over the last few months. I was heartbroken for them as they lost their Mom.
Likewise, my Mom has lost both of her parents and now her only sister. My heart breaks for her as well. The same could be said of her brother, my Uncle Steve.
I know that Jane had a very deep faith that kept her connected with God throughout the ups and downs of her life. She was, it seems, especially made to suffer at the hands of some of the “old school” religious ideas that were handed down early in her life. I don’t have a way to know this with any certainty, but I’m guessing that she spent some time being angry with God (not to mention some people) over some of that. But Jane yearned for freedom, heart and soul. As my wife, Jill, and I prayed with her in recent months—often alongside other family members and close friends—we could see Jane’s heart reaching out to God with everything she had.
As I ponder this, I am especially grateful now for her true freedom. She’s free of all the pain and nastiness that was brought on by cancer. But she’s also truly free in her spirit and soul right now. I can only imagine the reunion as she saw Jim and her parents again.
A graveside service will be held on Saturday, December 28th (tomorrow, as I write this) in Katy, Texas at 2pm (local time). Her body will be laid to rest alongside that of her husband, Jim, at the Katy Magnolia Cemetery. Jay and Rachel will be there, and I’m sure some friends from Jane’s years there in Katy will join them.