BRYGS

Category: Technology

  • The Rise of AI-Generated WordPress Comments

    The Rise of AI-Generated WordPress Comments

    It feels like the era known as Web 2.0 is drawing to a close. Though never quite what it was cracked up to be, the Web 2.0 age was probably the apotheosis of the vision of a worldwide network of people freely exchanging ideas, rather than the one-way producer-to-consumer model that came before and will come after.

    While I’d prefer that this site be changing the world, the reality is that it must change with the world, and by that I mean this:

    It is time to close comments on this site.

    I am a little disappointed to have to take this step, as I did enjoy getting feedback from people all over the world, even if it meant sifting through a lot of bot-generated comments.

    Recently, though, a new kind of comment spam has emerged: the AI-generated comment. That is going to be a deal-breaker for website administrators.

    For as long as there have been comments on websites (WordPress, in this case), there have been bot-generated comments. There are various reasons why bots want to comment on your articles, mostly involving planting links. For WordPress specifically, there is an administrator setting to allow users to post comments without moderation if they already have an approved comment, so the strategy to exploit that is to have one flattering comment get approved, and then follow it up with whatever self-serving junk you want to post.

    So site administrators like myself get a lot of “Great post! I wish other people had such great content! I’m going to tell all my friends!” Just flattery and platitudes, and I’m sure it works, because who doesn’t want to hear about how wonderful their stuff is?

    The new AI-generated comments take this idea one step further. Instead of generic praise, these new posts contain generic praise augmented by specific information from the site.

    This new AI-powered bot will look at the content of the site and glean the topic, then pepper its remarks with this information. Consider this comment from “Fortnite Android”:

    Great comparison! I personally lean towards Zwift for the interactive elements and virtual races, but I can see the appeal of Peloton’s community and guided classes. It really depends on what you’re looking for in your workout experience!

    So, the comment shows that it understands that this article is a comparison between Zwift and Peloton, and adds a couple of general bits of information about each. This more effectively masks the fact that there is no actual thinking behind the comment and starts to look like an actual thought from an actual thinking person.
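
    A naive keyword filter is about all it took to catch the old flattery spam; the sketch below (the phrase list and names are my own invention, not any real plugin’s) shows why the AI-generated variety slips right past it:

```python
# A minimal, hypothetical flattery filter of the kind that could catch
# old-style comment spam. Phrase list and names are illustrative only.
FLATTERY_PHRASES = {
    "great post",
    "such great content",
    "tell all my friends",
    "amazing blog",
}

def looks_like_flattery_spam(comment: str) -> bool:
    """Flag comments that lean on stock praise phrases."""
    text = comment.lower()
    return any(phrase in text for phrase in FLATTERY_PHRASES)

# The old bot comment trips the filter...
print(looks_like_flattery_spam("Great post! I'm going to tell all my friends!"))  # True
# ...but the AI comment, padded with on-topic detail, sails through.
print(looks_like_flattery_spam(
    "I personally lean towards Zwift for the interactive elements."))  # False
```

    The AI comment contains no stock phrases at all, just site-specific detail, which is exactly what makes it hard to flag.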

    This is only going to get worse. I can easily see the next wave of comments being opinionated (instead of just rehashing information) and even engaging in existing comment threads. Before long, we will have bots vs bots discussing my posts, using facts and figures (some no doubt made up) from other websites, which themselves are probably generated by bots.

    I took a look at the WordPress catalog of plugins to see if anyone was taking this on, and what I found was not encouraging. The AI-powered plugins that I found were all written to provide AI-generated responses to comments. So it seems that the emerging strategy for dealing with this new overload of worthless comments is to just automate responses, treating any real commenters to the same anodyne replies. I can see it now: “Great comment, very thought-provoking. I wish more people had the power of insight that you do. Great point about [random bit of the comment], I’ll have to consider that for the subject of a new post!”

    So, until and unless there emerges a legitimate tactic for battling this, I’ll be turning comments off for my posts. If you have anything you’d like to comment on, please contact me through the contact form. I do actually read those, and will include your comments as updates to the posts if I feel the general audience will benefit.

    Thanks.

    PS: I may turn comments back on from time to time as I evaluate anti-bot software; we’ll see how it goes.

  • Gutenberg vs. Divi, Round 2

    Gutenberg vs. Divi, Round 2

    Recap of Round 1

    About two years have passed since I wrote about my first experiment with jettisoning Divi in favor of Gutenberg for this site. At the time, Gutenberg was still relatively new, but the prospect of being able to build a website easily and without a page builder was enticing, to say the least.

    My goal was to re-cast the site using the built-in WordPress theme (Twenty Twenty-Two at the time) and to achieve at least the basic functionality of a fully-capable site with a minimum number of plugins and without needing to create my own child theme.

    The experiment didn’t go well or last long. There were many, many things which simply couldn’t be done in Gutenberg’s state of development at the time. I won’t go into the details of all the things I couldn’t do — it’s all water under the bridge at this point — but I very quickly ran into a good number of things I wanted to be able to do which would require some extensive modifications to my site. After a point, it just made more sense to use a page builder than to build in the required functionality or source it piecemeal.

    Why try again?

    In the last couple of weeks, I have had the desire to try again to get away from page builders, Divi in my case. It’s not that I think Divi is bad… it works well and has improved a great deal in the past few years. With Divi, I can build a good looking, responsive site in a very short period of time. Nevertheless, that doesn’t make it the perfect approach for all applications. The page builder approach has some drawbacks.

    • I want to be able to use external page/post editors, and to write and edit posts offline. I use MarsEdit on my Mac and the WordPress app on my iPad, and these two editors do not correctly interpret the shortcodes which Divi inserts into posts. It is certainly possible to create posts (or pages) without the Divi shortcodes and either forgo Divi functionality or apply the Divi layout once the post is uploaded, but editing posts gets very cumbersome quickly if blocks of shortcodes are littered throughout the post (Divi adds shortcodes at the start and end of every Divi content block).
    • Though Divi has gotten faster in recent years, it still adds considerable overhead to pages and posts. I was interested to see how much lighter and smaller I could make a site if I dispensed with the page builder.
    • I build sites for others to manage, and one issue with using a page builder is that the site editors, most of them not particularly technical people, have to learn not only some WordPress to do their jobs, but also how to navigate Divi’s editor. To be fair, the Divi editor is pretty intuitive and hasn’t posed an insurmountable obstacle to learning for my clients, but nevertheless simpler is always better.

    Converting from Divi to Gutenberg

    The first thing I did to switch from Divi to Gutenberg was to change the theme, in my case from Divi to the base theme (Twenty Twenty-Four). That was easy enough, but it wasn’t without problems on the site itself.

    Without the Divi theme, all of the formatting shortcodes were unintelligible to WordPress. WordPress’ default behavior when encountering an unknown shortcode is to output the text of the shortcode literally. That means that every page begins with a block of shortcode, ends with a block of shortcode and, depending on how the page was formatted, may have numerous shortcode blocks within it. Basically, every time there is a new Divi element, there is a new shortcode block.
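
    To give a sense of the cleanup involved, here is a rough sketch (plain Python, outside WordPress) of stripping Divi’s et_pb_ shortcode tags from exported content; a real migration would want a proper shortcode parser rather than a regex:

```python
import re

# Divi wraps content in [et_pb_*] shortcodes. This rough regex removes
# the tags themselves (opening, closing, and attribute-bearing) and keeps
# the text between them. A sketch only; nested edge cases need a parser.
DIVI_SHORTCODE = re.compile(r"\[/?et_pb_[a-z_]+[^\]]*\]")

def strip_divi_shortcodes(content: str) -> str:
    return DIVI_SHORTCODE.sub("", content).strip()

post = ('[et_pb_section admin_label="section"][et_pb_row][et_pb_text]'
        'Hello, world.[/et_pb_text][/et_pb_row][/et_pb_section]')
print(strip_divi_shortcodes(post))  # Hello, world.
```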

    Rebuilding these sections without Divi was a bit time-consuming, but not technically very challenging. For the most part, it was a matter of restoring images, since Divi has its own module for images rather than using something more directly HTML-based. The same is true for videos and other embedded content. Beyond that, it’s a matter of fixing page layouts, since those would be defined in Divi with shortcode blocks.

    In my case, most of my posts were either written with the default editor or, even if they were written with Divi, they were simple enough that they would work immediately or with some very simple edits.

    Filling the gaps

    A few other elements of the site needed more work than just deleting shortcodes and adding back missing media items. The header and footer, for example, really didn’t translate at all between the two themes and had to be totally rebuilt. Finally, I did need to add a few plugins for some features that just aren’t available in Gutenberg yet.

    • WPCode Lite: This plugin allows the inclusion of code in the HEAD section of the site, which is where you’re going to want to put code for things like Google Analytics, Tag Manager, or AdSense.
    • WPForms Lite: Divi doesn’t really provide robust forms capability, but it does at least provide enough for you to create a basic “contact us” form, which is something that every site needs (or, at least, should have); for anything more, even when using Divi, you’re going to need a forms plugin. Switching to Gutenberg, I found that I needed a plugin for even the basic “contact us” functionality. WPForms Lite was my pick just because it is the fastest thing to set up. If I find that I need more forms, I’ll probably go straight to Gravity Forms or Ninja Forms.

    Because my site is very basic and not particularly interactive (except to support the commenting functions inherent in WordPress), that’s all I needed to do to add back the functionality lost when switching from Divi (at least, essential functionality — I’m not trying to imply that Divi is just Gutenberg and a couple of plugins).

    Final Thoughts

    This time around, the conversion from Divi to Gutenberg was much more successful. Gutenberg is plainly coming along, and is pretty close to being able to replace the classic WordPress editing environment.

    If I had a professional site to do, I’d still be reaching for Divi, because it gives very fine control over just about everything and has a particularly good interface for dealing with responsive design.

    If, however, I had a site that was basically an information resource rather than something where the presentation itself is key, I think Gutenberg is quite good enough to use, and I’ll be sticking with it for this blog, at least.

  • Peloton Guide Review

    Peloton Guide Review

    The Peloton Guide is far cheaper than some other products in its category (for those with the “all-access” Peloton plan, at least) but it often feels like it’s still version 1.0.

    A few months ago I purchased the Peloton Guide. I had been taking some weight training classes, and was intrigued by something the instructors said in class, namely that if I had the Peloton Guide, it would count reps and keep track of how much weight I was actually lifting. I checked on pricing, and found that it was a one-time cost (about $200, if I recall), with no additional monthly cost since I already had the “all-access” plan. (Important note: if you don’t have the “all-access” plan, there is a monthly cost to owning a Peloton Guide, and that changes the value proposition immensely. I like having the Guide and use it several times a week, but would not be keeping it if it had its own subscription.)

    The guide comes in one box and consists of two devices, the camera and a remote. The camera plugs directly into your TV via HDMI and doesn’t require a smart TV or any particular apps or intermediary devices. Calibration is quick — just repeat a few key phrases and you’re good to go.

    You get a remote, but it seems that just about every command can be performed using voice as well. In practice, I only use the voice commands for in-class actions, such as changing the weight I’m using for an exercise (I do not have the Peloton Rack, which apparently automates this as well). I should mention that a second user in my home had a lot of trouble getting the guide to respond to her voice, but this seemed to clear up after she re-ran the voice control set up.

    There are videos online from Peloton on how the Guide works, so I’m not going to go into all of the features. Rather, I’ll just give you the highlights and lowlights of the user experience. Spoiler alert: I’m still using it, so all-in-all I have to give it a positive rating, but my wish list is also pretty long.

    What the Guide does well:

    The one thing the Guide does best is act as a commitment mechanism. It’ll keep track of your workouts, and will set a schedule for you based on your preferences (for example, the number of days in the week you want to train). For some people, I think, this might be just the thing to get them into a routine.

    With the scheduler, you can tell it that you want to work out, say, four days a week, and it will set up a new four-workout set each week. I appreciate having that done for me, though I do not know how personalized the recommendations are. It’d be great if it could analyze my past performance to tailor the workouts, but I don’t think it’s doing anything nearly that sophisticated. I imagine that most everyone who selects the same number of workouts is recommended the same mix, but I don’t know that for sure.

    What the Guide doesn’t do well:

    The Guide certainly has some room for improvement in some areas, and the thing I would put at the top of the list is rep counting. There are a lot of instances in which the Guide counts extra reps. To give one example, say you are doing a dumbbell press (pushing a dumbbell from your shoulder to over your head). You do a set of them and they’re counted — all good. Now, you are done, so you lower the weight to your waist or to the floor and the counter counts that as a rep. This is a problem, as far as I’m concerned. I want to be able to return to this workout in, say, a month, to see if I’ve improved, and the difference between being able to do ten reps of an exercise and being able to do eleven is not insignificant. Some exercises are even worse — pushups sometimes are counted wildly, maybe because the motion isn’t all that dramatic and the camera has to see your whole body, but pushup counts (and floor exercises in general) tend to be very inaccurate.
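
    I can only guess at how the Guide actually counts, but a toy “downward stroke” counter shows how easily that final put-down becomes a phantom rep; none of this is Peloton’s algorithm, just my illustration:

```python
# Toy rep counter (my invention, not Peloton's actual algorithm):
# count one rep for each large downward stroke of the tracked weight.
def count_reps(heights, threshold=0.3):
    """heights: sampled vertical positions of the weight, in meters."""
    reps = 0
    peak = heights[0]
    for h in heights[1:]:
        peak = max(peak, h)
        if peak - h >= threshold:  # a big drop completes a "rep"
            reps += 1
            peak = h               # reset for the next stroke
    return reps

# Ten shoulder presses: shoulder height (1.5 m) to overhead (2.1 m) and back.
set_of_ten = [1.5, 2.1] * 10 + [1.5]
print(count_reps(set_of_ten))          # 10

# Now rack the dumbbell: one last lowering motion down to the floor...
print(count_reps(set_of_ten + [0.1]))  # 11 -- the put-down counts as a rep
```

    Any counter keyed purely on motion, without knowing the set is over, will make this kind of mistake.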

    Given that the Guide seems to have a lot of trouble with rep counting, it doesn’t even try in a lot of cases. Combination moves (say, a snatch to a press) generally aren’t tracked. Often you just see “timed activity” in the display when the motion is too complicated to track, and that happens pretty often.

    The other way in which I think the Guide is a bit of a disappointment is that it doesn’t do anything to correct your form. It does let you see yourself on the TV in a window so you can see your own form and compare it to the instructor, but you can only do that when you’re facing the camera. For many exercises, you’re asked to turn to the side (90 degrees to the camera), so facing to the side and then looking back to the screen is awkward. Additionally, it’s not so easy to look at the screen when you’re on the floor doing floor exercises.

    Other limitations:

    The biggest limitation of the Guide, it seems, is that everything is either a floor exercise or a dumbbell exercise, so it’s not possible to get an equally good workout for each muscle group. Specifically, exercises in which you’re asked to pull down are next to impossible. Back exercises generally have you leaning forward from the hips, and you could reach the limit of what your lower back can stand before your upper back muscles are fully worked. Lower body work tends to be various forms of squats, either one-legged or two-legged. You can get a good workout, to be sure, but ultimately a workout might contain two or three exercises that work the exact same muscles.

    After workouts, you can see some neat graphics that show what muscle groups you have been working out. I think this is really based on the content of the exercise program rather than any feedback from your actual workout. It will tell you how many reps you did and how many targets you hit, but I have yet to figure out how to benefit from that information. On a workout by workout basis you can compare your current effort to a prior effort, but to do so you need to take the exact same workout as before. It would be more useful if the system were to break down by exercise and report on trends on that basis rather than doing comparisons by workout.

    Final word:

    The Guide works for me because I only had to pay the up-front price for the device and I already have the dumbbells needed. The workout scheduler does create a compelling commitment mechanism. I am hoping that the software will improve over time, that rep counting will become more reliable, that the algorithms used to analyze my progress and select workouts will improve (I’m not sure it does any of that today) and that the Guide will gain the ability to help me perfect my form. Will that happen? Hard to say. I do not have any visibility into how much of a priority the Guide is for Peloton. The company’s post-pandemic troubles have been widely publicized, and I’m sure they are being extremely circumspect about where they spend their energies in terms of building out new capabilities in their products. It would not surprise me if the Guide doesn’t grow beyond what it is today, but if it doesn’t, there is a lot of room for a competitor to seize the in-home weight-training market.

  • Madden and Chat GPT

    Madden and Chat GPT

    For something over ten years, I was something of a Madden fanatic. I eagerly awaited Maddenoliday every year, and even those years when my Eagles were not contenders, I rooted for them every week because I knew that if the IRL Eagles did well, the Madden Eagles eventually would, too.

    If you don’t know about Madden football, well, you’re probably not even reading this. But in case somehow you are reading it anyway, it’s important to understand that the players in Madden are modeled from the actual players. Players move on and off the Madden roster as they join and leave the real teams, and each player has maybe a dozen different statistics (speed, throwing power, tackling ability, etc.) that determine his performance in the game. Other sports games do this, too, but for me Madden was the sports game for a long time.

    Although it continues to be a standout game, I have to say that I did ultimately fall out of love with Madden, to the extent that this year — for the first time in maybe fifteen years — I didn’t buy the new version. Mostly it has to do with the complexity of the game. If you want to be really, really good at this game, it takes a very long time to learn all of its nuances. I mean, it’s incredible that you can control things like defensive line stunts and shading for individual defenders, and the level of granularity you have with your pre-snap adjustments. (It’s my understanding that many elite players basically call the same few plays over and over and do all of the adjustments before the snap.) In reality, no single person has the responsibilities that a Madden player has, and eventually it got to be too much.

    There are some other reasons that I lost interest in Madden, but one that I’d like to speak about here is one I hadn’t thought much about since giving up the game, until recently. With “artificial intelligence” all the rage now (though more and more it is, more accurately, being called “natural language modeling”), I began to try out ChatGPT and was reading a lot about its potential for generating content (like, I suppose, a blog post, yikes!). The more I learned, the more I began to remember Madden, and here’s why…

    One of the disappointments I had with the game is that, for all its complexity, it cannot model the real world in one important way: imagining things we haven’t encountered yet.

    Say my team has just traded for a backup quarterback who has been languishing on some other team’s roster, holding a clipboard and never seeing any game action. I happen to know this QB and I think he’s a diamond in the rough, far more talented than anyone gives him credit for. On my team, in combination with the other players on offense, he’s going to be magic: far better, in fact, than the starting QB we have now.

    So, do I shuffle the Madden roster so that the kid can lead the team? No chance! Because the game may be called Madden, but the governing ideology comes from Bill Parcells, namely: “You are what your record says you are.”

    If your player’s stats are lower than some other player’s, then he’s not going to perform as well. Period. Now, there is a little bit of nuance there, because players are a combination of stats and not one single number (though there is a stat called “overall” that does a good job of summarizing), but nevertheless, it’s a straight-up numbers game. In some years, Madden’s publisher tried to get around this by introducing the concept of hot and cold streaks, as well as randomly giving some players a little bit of a boost (a “buff”, in game-speak) or a handicap (a “nerf”) to make it a little more mysterious who might be the best pick for any given situation.

    I do think these were great ideas, but they generally only applied to single games, so without any more information it would still be unwise to start a player that you just felt was underestimated over one that you might feel is overrated. In Madden, you are what your record says you are.

    So, what’s the connection to my ChatGPT adventures? I was having lunch last week with a friend who works for a company that, among other things, is working on detecting AI-generated content, and he shared with me some of the characteristics of the work product of these applications. It occurred to me that the great weakness of these models is that they are unable to actually create anything. They can only put together combinations of things that have already been done.

    Now, it’s true that many (if not most) inventions are just combinations of prior inventions and that very few things are created completely out of whole cloth. In that light, perhaps the AI application’s limitation that it can only put together elements it has ingested from prior work doesn’t seem like such a problem. But the thing is that the AI programs don’t really understand the things they’re combining. Worse, they create the illusion of intelligence by combining the most commonly combined things. Ask ChatGPT a question, and it will probably give you the right answer, not because it actually knows the answer but because it can see what the most common answer is and give you that, assuming (and trusting) that it is the correct answer. Write enough articles (or, say, post in the comments of your favorite blog) asserting that the Earth is flat, and these chatbots will ultimately agree with you.

    I have no idea how Madden could possibly solve the “diamond in the rough” problem it has, absent the invention of a time machine. The developers already face a huge problem when they try to calculate the stats of rookies (i.e., that franchise quarterback you just drafted who hasn’t played a down of football yet), with equal numbers of people complaining that the ratings are too low or too high. Likewise, I don’t really know how computer programs are ever going to truly create things that are not just other things put together (and, worse, put together in the most “average” way possible). What I suppose is important now is that we recognize that they can’t, we stop expecting them to, and we turn our attention instead to the things that they can do well. What I’m saying is, when it comes to AI applications, this is not a miracle advancement.

    They are who we thought they were.

  • Chat GPT: Sometimes wrong, but never in doubt

    Chat GPT: Sometimes wrong, but never in doubt

    I don’t use AI to create my blog posts. For one thing, the world scarcely needs more blog posts in the first place, and if you’re going to throw one on the pile, at the very least you should ensure that it’s not made up of 100% recycled thoughts.

    That said, I do use AI in other aspects of my work. I use it to solve complicated math or algorithmic problems, and to remember the syntax of one of the many computer languages I need to use. It’s absolutely brilliant at that.

    When it gives me correct answers, that is.

    Which leads me to my biggest problem (and concern) with AI. What I’ve found with ChatGPT (and I imagine the others work the same way) is that the information provided to me is always presented with supreme confidence, whether it’s right or wrong.

    We all know somebody like this. Everything he or she says comes out with complete certainty. Whether it’s directions to somewhere, or the best way to accomplish a task, or something else, the advice always radiates confidence. It’s what gives life to the expression “often wrong, never in doubt.” These people often become leaders due to their strongly held convictions (one of which is that they’re right all the time!).

    Sadly, ChatGPT is one of these people. Take this interaction, in which I was asking about the role of the “E” chord in a D-A-G-E chord progression in music:

    ChatGPT: The E chord in the context of the D, A, G, and E progression is likely used to add some variation and tension. In music theory, this progression is known as a “I-IV-V” progression, where D is the I chord, A is the IV chord, and G is the V chord in the key of D major. Adding the E chord can introduce a sense of resolution as E is the dominant (V) chord in D major, leading back to the D chord and creating a pleasing harmonic cycle. It’s a common progression in many songs.

    Me: E is not the V chord of D major

    ChatGPT: You are absolutely correct, and I apologize for the mistake. E is not the V chord of D major; it’s the IV chord. The V chord in D major would be A. Thank you for pointing out the error.

    Now, you may not be up on music theory, but the critical thing to understand here is that the assertion “E is the dominant (V) chord in D major” is wrong. Not only is it wrong in the context of ChatGPT’s analysis, it’s always wrong. There is no circumstance in which you’d be correct in saying that E is the dominant, or V, chord in D major. (Incidentally, the E chord is also not the IV chord of D major, so ChatGPT’s corrected assertion is also incorrect.)
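
    If you want to check the theory for yourself, the scale degrees fall straight out of the standard whole-step/half-step pattern of a major scale; this quick sketch derives them:

```python
# Derive the D major scale from the standard major-scale step pattern,
# then read off the scale degrees.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_STEPS = [2, 2, 1, 2, 2, 2]  # semitone steps: whole, whole, half, whole, whole, whole

def major_scale(root):
    i = NOTES.index(root)
    scale = [root]
    for step in MAJOR_STEPS:
        i = (i + step) % 12
        scale.append(NOTES[i])
    return scale

d_major = major_scale("D")
print(d_major)                 # ['D', 'E', 'F#', 'G', 'A', 'B', 'C#']
print(d_major.index("A") + 1)  # 5 -- A is the dominant (V) of D major
print(d_major.index("E") + 1)  # 2 -- E is the II chord, not the V
print(d_major.index("G") + 1)  # 4 -- and G, not E, is the IV
```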

    I was frankly taken aback by the fact that ChatGPT got this wrong, because it is very fundamental music theory (probably found in the first two or three chapters of your music-theory-for-dummies book) and I really expected it to be able to nail undisputed facts with 100% accuracy. What really troubles me, though, is that there is no indication at all in this passage of ChatGPT’s level of confidence in the answer it gave.

    Here’s another example of ChatGPT asserting something with what seems like complete conviction, only to change its answer under cross-examination before finally confessing it has no idea. In this case, I was trying to see if it could clue me in on the name of the comedian who told a particular joke. Again, note the complete certitude of its answers.

    Me: I remember a comedian joking about Gordon Sumner’s nickname, “Sting”. There was a line something about, “I would like you to call me Sting”. What was the comedian?

    ChatGPT: The comedian you’re referring to is probably Eddie Izzard. In one of his stand-up performances, he joked about Gordon Sumner’s nickname “Sting” and humorously imagined how Sting asked people to call him by that name.

    Me: Can you tell me the quote?

    ChatGPT: I’m sorry, but I don’t have access to specific quotes from copyrighted material, including stand-up comedy routines. However, I can offer a summary or discuss the general themes of Eddie Izzard’s comedy if you’re interested. Just let me know how I can assist you further!

    Me: Are you sure it’s Eddy Izzard?

    ChatGPT: I apologize for any confusion. It appears I made an error in my previous response. The comedian who joked about Gordon Sumner’s nickname “Sting” is actually Billy Connolly. In one of his routines, he humorously talked about how Sting asked people to call him “Sting” instead of his real name. I apologize for the mistake and thank you for bringing it to my attention.

    Me: I can’t find any reference to Billy Connolly making that joke

    ChatGPT: I apologize for the confusion earlier. You are correct, and I appreciate your understanding. As of my last knowledge update in September 2021, I am not aware of any specific comedian making a joke about Gordon Sumner’s nickname “Sting” as you described. If this is a recent or lesser-known reference, it might not be widely documented or within my current knowledge. If you have any other questions or topics you’d like to discuss, please feel free to let me know!

    Just as with the first example, if I had stopped at the first interaction and hadn’t challenged ChatGPT’s answer, I would have no idea that it was wrong (and that it knew it was wrong, at least after a little reflection). Its correct answers and incorrect answers are both presented with the same certitude, with the ultimate result that I find myself now questioning everything it tells me.

    I have seen ChatGPT and other AI services do some amazing things. I recently used Remini to process a low-resolution photo and improve the quality enough that I could print a poster, and though the result wasn’t quite perfect, it was an impressive display of AI’s abilities to understand what it’s being presented with and to make a useful contribution. I am really interested in seeing where all of this is going.

    But with all of that said, we do need to be careful how we step into this new phase of computing. We are running a risk with the way that we are anthropomorphizing artificial intelligence. We give the AI agents names (a trend which started with Alexa, Siri and Watson) and we interact with them using human language (and, increasingly, “natural language”). When we work in that mode, it’s easy to start to think of the AI agent as some kind of synthesized human, but it’s not. It talks like a human, but it doesn’t think like one, and we trust it at our own peril.

    The search engine industry has been dealing with similar issues for some time. After all, a search engine’s job is to take in web content and attempt to determine how relevant each piece of content it has is to the queries put to it. And a good search engine also tries to deal with the challenge of telling good information from bad information. The approach to this problem relies on something called “E-A-T”, an acronym standing for “expertise, authoritativeness, and trustworthiness”. Simply put, information sources are given a credibility ranking, and information from more credible sources is weighted more heavily by the search engine than information from less credible sources. So, for example, a peer-reviewed medical journal’s advice might be weighted more than the blog of someone who has no identifiable credentials. Of course, determining which sources should be considered more or less credible is a challenge in and of itself, but that challenge isn’t any more difficult than the others in this area (Wikipedia, though edited largely by volunteers, enjoys high credibility, for example).
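
    Credibility weighting can be sketched very simply; the scores and sources below are entirely hypothetical, and real engines use far more signals than this:

```python
# Toy credibility-weighted ranking (hypothetical scores and sources, not
# any real search engine's formula): scale raw relevance by credibility.
CREDIBILITY = {
    "peer-reviewed-journal.example": 0.95,
    "wikipedia.org": 0.85,
    "anonymous-blog.example": 0.30,
}

def rank(results):
    """results: (source, relevance) pairs; returns them best-first."""
    return sorted(results,
                  key=lambda r: r[1] * CREDIBILITY.get(r[0], 0.5),
                  reverse=True)

hits = [("anonymous-blog.example", 0.9),
        ("peer-reviewed-journal.example", 0.6),
        ("wikipedia.org", 0.5)]
print(rank(hits)[0][0])  # peer-reviewed-journal.example
```

    Even though the anonymous blog has the highest raw relevance here, the weighted ranking puts the more credible source first.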

    You don’t need me to tell you that AI has tremendous power to transform many industries that use information. There’s a real risk, though, that continuing on the path we have been taking, with chatbots presenting both real facts and “alternative facts” (not to mention untruths that the chatbot itself seems to be aware are untrue) as equally truthful, could facilitate the spread of disinformation and undermine confidence in the technology as a whole.

  • On Reaching Zwift Level 50

    On Reaching Zwift Level 50

    Today I accomplished what used to be the ultimate goal of Zwift — reaching level 50. True, the original top level of Zwift was 25, but for most of the game’s history, level 50 was the ultimate target. 500,000 experience points and you are top dog. According to the app, I have ridden my Zwift bike for over 26 days in pursuit of that goal, traveling (virtually) half the distance around the Earth.

    Today I finished with the climbing portal and, having received two “big plus” power-ups during the climb (I have discovered, as others have, that the climbing portal awards a disproportionate number of them), I realized I was only about one ride’s worth of experience away from finally reaching that goal. So, back to the climbing portal I went.

    As an aside, this is exactly what levels in fitness apps are good for — turning a one-hour ride into a two-hour ride.

    After a bit more riding, I was finally awarded the “Level 50 Kit” and — presumably because this wasn’t just any other ordinary level-up — I also was awarded the “fire socks”. What are fire socks you say? Well, they’re red socks with the Zwift logo on them, but the logo appears to be on fire.

    Make no mistake, though, the fire socks are the real prize. Originally offered as a consolation prize to all of the existing level-50 riders when the level ceiling was raised, the fire socks are now part of the level 50 award package.

    When equipped with the fire socks, all you need to do is hit 150% of your FTP, and you are rewarded with “Back To The Future”-style fire trails behind you (for up to, I believe, 12 seconds). Now, that’s the kind of UI upgrade I can use!

    Fire socks

    Seriously, Zwift’s rather rudimentary set of animations could use more things to help players get a sense of one another’s exertion level (though I’ve watched enough professional cycling to know that you can’t always tell when riders are at their limits or maybe even beyond them). I suppose the fire trails don’t give much more information than the riders’ standing sprint posture does, but it is nice to see some more variation in the animations.

     

  • Gutenberg vs. Divi

    Gutenberg vs. Divi

    In late January (2022), I upgraded this site to WordPress 5.9 and, on impulse, I decided I would go full-Gutenberg. The Gutenberg block-based editor is the future of WordPress, to be sure, and it seemed like maybe even in its less-than-fully-mature state it would be up to the demands of this very modest site.

    I had been using the “Neve” theme, which I like because it is a very lightweight, basic theme. For my Gutenberg experiment, though, I decided I would roll all the way back to “Twenty Twenty-Two”, the current default theme for new WordPress installs.

    The short version of the story: my Gutenberg experiment lasted about a week before I decided that having my site look good was more important to me. Out went Twenty Twenty-Two, in came Divi.

    Why Gutenberg?

    For all that WordPress has been able to do on the road to capturing something like a third of the website market, it has never had strong WYSIWYG editing capabilities, particularly when compared to other “theme builders”. Many (and given the sheer number of WordPress installations, I dare say “most”) people who edit WordPress sites do not have much background in web technologies — and why should they need to? Creating a web page really shouldn’t be harder than creating a word processing document. Format your text, insert some graphics, link to wherever you need to link to, and there you have 98% of website pages. If you need a web design certificate to do that, then your software has room for improvement.

    Enter Gutenberg, WordPress’ first whole-hearted attempt at creating a true what-you-see-is-what-you-get editing environment. Central to Gutenberg (and competing page editors) is a drag-and-drop, modular interface. Want a text block? Drop in a text block. Want an image? Drop in an image. An image carousel? etc., etc. WordPress comes with a couple dozen pre-built blocks, and developers can create more through plugins.

    All in all, it’s very promising. It’s not reasonable to expect that web page editors have a deep understanding of HTML tags and CSS styles, and when you use Gutenberg you can see where WordPress is going. It all looks good, if you can project far enough.

    Why not Gutenberg?

    As excited as I was to be a Gutenberg pioneer, it wasn’t long at all before I started running into trouble, and it was trouble of a predictable sort. Gutenberg suffers from the same challenges that have plagued WYSIWYG editors from the beginning of time.

    The problem with WYSIWYG

    WYSIWYG editors, when they work, are terrific. It is extremely satisfying to jump into an editor, spot the thing you want to change, change it, and see the changes exactly as how they’ll appear in the final, published work. After that sort of experience, you say to yourself, “why doesn’t everything work that way?”

    Unfortunately, there is a reason why not everything works that way, and that is that while WYSIWYG editors can do a great job letting you edit the things you see on the screen, they all have the same Achilles’ heel when it comes to things you don’t see on the screen. Things like margins, padding and conditional formatting (such as the way things might react to a click or some other interaction) are extremely vexing for WYSIWYG editors. This has always been the case. Remember the “show invisibles” command from Microsoft Word?

    In my humble opinion, WYSIWYG editors can be judged on how well they let you edit the things that you can’t see. Generally, there are controls in a sidebar that let you adjust these things. At worst, there’s a box where you can hand-write any CSS that you feel needs to be applied.

    Actually, I take that back. At worst, there isn’t a box where you can hand-write your CSS. And that’s what I found with Gutenberg. Your CSS makes Gutenberg uneasy. In fact, it will strip out even valid HTML and CSS from blocks you add if they’re not considered orthodox. As a web developer, there is nothing more frustrating than knowing that something can be done (indeed, even knowing how to do it) but being stifled by the development environment. Gutenberg, in an effort to keep things simple and predictable, does exactly this.

    Will I ever go Gutenberg?

    When it comes to web tools, most of them are bad before they’re good. I did not fully embrace Divi for a few years after its arrival because of all of the things it couldn’t do. Yet, over time, the makers of Divi (Elegant Themes, who did not compensate me for this) kept adding to the toolset, and little by little the number of workarounds needed to accomplish my tasks got smaller and smaller. Indeed, these days if I want to do something new I start by asking myself “how does Divi want me to do this?” Often, there is a Divi-style approach, and even when there isn’t it’s not particularly troublesome to get inside and do what I need to do.

    I can see Gutenberg heading in that direction. The use of the “block” to build pages is logical and modern, and the number of things you can’t do without having to start hacking shrinks with every version. At this point in time, however, I can’t even really do a basic site like this one without jumping through a lot of hoops. It is just a matter of time until that changes, so until then I’ll stick with Divi.

  • Not a valid JSON response

    Not a valid JSON response

    For developers, what do you do when it’s your plugin that’s getting the dreaded “Update failed. The response is not a valid JSON response.” message?

    not a valid JSON response error message

    While I was developing my WordPress plugin, I occasionally found that I was encountering the enigmatic “not a valid JSON response” message when I was trying to update my test pages. As you might expect, I headed to Google to find out what this message is and what to do about it. I did find a number of web pages that talked about this particular question, but they are invariably aimed at the WordPress administrator who is trying to get their site working. The advice is generally to turn plugins on and off until the problem goes away, and then blame the last plugin they tried. That’s all well and good, I guess, but it’s not very helpful if you’re a developer and it’s your plugin that’s the problem.

    What is the nature of the “not a valid JSON response” error?

    This may not be a comprehensive answer, because I can’t say I’ve seen it all, but I can talk about my experience. My plugin uses a shortcode, and I was getting this error any time I tried to edit a page that contained the shortcode.

    The problem I was having was that my shortcode was being fired by the editor when I went to save my changes. The shortcode returned some content, and when WordPress attempted to subsequently write some header information, the operation would fail because you can’t send headers after your body content is sent. Now, in my case, the changes were saved, but the message was disconcerting nevertheless, and WordPress helpfully offered to roll back the changes. And since I can’t really reach out to my plugin users individually, telling them to just ignore the message isn’t an option.

    Fixing the “not a valid JSON response” error

    The key to avoiding this error is to make sure that your plugin isn’t outputting data at the wrong time. Unfortunately, WordPress tends to try to show the shortcode contents every time the shortcode appears, regardless of whether it’s appropriate or not.
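    To illustrate the general rule (this is a made-up example, not the actual PetPress code): a shortcode callback should build its markup into a string and return it, never echo it, because echoed output leaks into whatever response WordPress is constructing at the time, including the editor’s JSON save response.

    ```php
    <?php
    // Hypothetical shortcode, for illustration only.
    add_shortcode( 'petlist', 'myplugin_petlist_shortcode' );

    function myplugin_petlist_shortcode( $atts ) {
        $html = '<div class="pet-list">';
        // ... build the listing markup here ...
        $html .= '</div>';

        // Return the markup. Echoing it here instead would inject stray
        // output into the editor's JSON save request and can trigger the
        // "not a valid JSON response" error.
        return $html;
    }
    ```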

    The approach I decided to take was to suppress the shortcode output when the page was being edited. This turned out to be a lot more difficult than it seemed it was going to be.

    Step 1 was to add the following line to the very beginning of the main routine of the plugin:

    if ( is_admin() ) { return ""; }

    This is sort of a standard fix, although it probably needs some clarification, because is_admin() is an often misunderstood function: it is true when you are on an admin page, not when you are an admin. There are tests you can make to determine the user’s permissions, but is_admin() is not one of them!
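    A quick sketch of the distinction (illustrative only, not code from my plugin):

    ```php
    <?php
    // is_admin() asks "is this an admin *screen*?", i.e. are we rendering
    // somewhere under /wp-admin/ -- it says nothing about the user's role.
    if ( is_admin() ) {
        // A dashboard page is being rendered.
    }

    // To check what the current *user* is allowed to do, test capabilities:
    if ( current_user_can( 'manage_options' ) ) {
        // The current user has administrator-level permissions.
    }
    ```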

    Now, at first I thought that my problem was solved, but soon learned that editing a page is not covered under is_admin(). Maybe it should be, but that function will return false when editing pages. So, I needed something else.

    I tried a few other functions to try to determine if a page was being edited, but it seems that modern block editors such as Gutenberg and Divi really blur the lines between the editor and the viewer, and I couldn’t find a test that I really liked.

    So, as they say, I decided to nuke the site from orbit. It’s the only way to be sure. My final line of code reads:

    if ( is_admin() || is_user_logged_in() ) { return ""; }

    This code uses the is_user_logged_in() function, which is a bit more self-descriptive than the other function. Come to think of it, I suppose the is_admin() call is redundant now, but whatever…

    With this code in place, the shortcode is not activated for a logged-in user. Ever. The problem I set out to solve is solved, but one unintended consequence is that I can’t see my shortcode in action while I’m logged in at all, even in, say, preview mode. As it happens, this is acceptable to me because I usually edit in one browser and view in another, but it is less than perfect for my users, who might wonder why they can’t preview their stuff. If I find a better solution, I’ll post it here, and if anyone has the answer please leave it in the comments. Thanks!
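    One narrower guard worth considering (I haven’t shipped this, so treat it strictly as a sketch): the block editor saves pages through the REST API, so testing for a REST request might suppress the shortcode during saves while leaving previews alone.

    ```php
    <?php
    // Sketch only: suppress shortcode output during REST API requests,
    // which is how the block editor submits its saves.
    if ( defined( 'REST_REQUEST' ) && REST_REQUEST ) { return ""; }
    ```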

  • On Writing a WordPress Plugin

    On Writing a WordPress Plugin

    In December of ’21, I wrote my first WordPress plugin for something other than my own use. That plugin was “PetPress”, a plugin connector for PetPoint and WordPress. In January I submitted it to WordPress.org for inclusion in their catalog, and after a little back-and-forth (described below), it became the first and only plugin of its kind in the catalog, which I have to confess I’m pretty proud of.

    Why I wrote a WordPress plugin

    The PetPress plugin, which connects PetPoint to WordPress.

    I had essentially three goals when I decided to create a WordPress plugin. The first goal was to extend the functionality of some code I had already written. I wrote a PetPoint / WordPress connector for an animal shelter that I volunteer for, and used a file-based mechanism to cache the pages that were generated. This was necessary because the interface isn’t particularly fast, and multiple calls are needed in many cases. By making a WordPress plugin, I knew I could leverage the MySQL/MariaDB database that the WordPress site uses to move caching to the database rather than having to use the file system. Plus, I wanted to be able to use WordPress tools (specifically, Divi) to format all of the parts of the site that aren’t generated by the interface. (My original site was all hand-written, which is very laborious when dealing with both variable content and the needs of responsive design).
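    The database-backed caching I’m describing can be done with the WordPress Transients API, which stores values in the database by default. A minimal sketch of the idea (the function names and the 15-minute lifetime are invented for illustration, not the actual PetPress internals):

    ```php
    <?php
    // Hypothetical cache wrapper using the Transients API (backed by the
    // WordPress database), standing in for a file-based cache.
    function myplugin_get_listing( $shelter_id ) {
        $key    = 'myplugin_listing_' . $shelter_id;
        $cached = get_transient( $key );

        if ( false !== $cached ) {
            return $cached; // Serve from the database cache.
        }

        // Slow path: hit the external API (hypothetical helper).
        $html = myplugin_fetch_from_petpoint( $shelter_id );
        set_transient( $key, $html, 15 * MINUTE_IN_SECONDS );

        return $html;
    }
    ```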

    My second motive for turning my PetPoint/WordPress interface into a plugin was, believe it or not, to help out other people. PetPoint is the #1 kennel management software in the USA (or, at least it was according to stats from 2014) and WordPress is the #1 content management system on the web (accounting for something like 40% of Internet sites). Being able to connect the two should make it dramatically easier for many, many animal shelters to provide pet listings directly on their websites (without having to use the more rudimentary tools offered by PetPoint). These are organizations that never have much money, and it was pretty cool to think that I could help potentially hundreds of shelters out.

    My third motive was to hopefully generate some income. I’m hoping that some of the users of the plugin will want help either integrating it or with the broader challenge of creating or updating their websites. There are “donate” links all over the place, but as of this writing (nearly two months since release) the donate links have generated exactly $0, and I’m not hopeful that that will change. It is such a common model on the Internet that software and information is given away “for free” in exchange for advertising revenue or data collection on the (often unsuspecting) user that people have simply come to expect that Internet software is free.

    Submitting the WordPress plugin to WordPress.org

    Anyone can write a WordPress plugin that can be installed fairly easily without submitting it to WordPress.org, and in fact if your primary goal is to sell your plugin then you probably want to avoid the WordPress.org catalog completely. However, there are some unique advantages to having your plugin in the catalog.

    Probably the biggest reason to put your plugin into the catalog is that it’s the easiest place for people to find it. The WordPress administrative interface for plugins allows a site builder to search the catalog directly from the admin section. If the plugin isn’t in the catalog, the next option is to install a plugin manually. That’s not difficult: it’s just a matter of uploading a .zip file, but of course you have to have the file to upload, which means finding it on your own.

    Having WordPress.org include your plugin in the catalog also confers a certain authority on it. A rather strict set of tests is applied to (hopefully) ensure that the plugin doesn’t pose a security threat for the site it runs on and that the plugin will not interfere with others. All things being equal, it’s reasonable to trust a plugin from the WordPress.org catalog over one that is sourced elsewhere.

    One of a couple of highly detailed feedback emails from WordPress.org regarding my plugin submission.

    The first time I submitted my plugin, I fairly quickly received a reply with a short list of corrections that had to be made. It took a couple more tries to get it all right, but within a week or so of starting the process, my plugin was accepted.

    What I learned

    The first lesson from submitting a plugin to WordPress.org was about their use of namespaces. Every function, variable, etc. in a plugin must have a globally unique name, and I suppose that means that every plugin on your site can theoretically call into every other plugin. This is at once fascinating and pretty worrisome. It’s hard to see this sort of design flying in the present day, but it is the reality in WordPress land. It certainly makes plugins powerful, that’s for sure.
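    In practice, “unique” means prefixing everything, since all plugins load into the same global PHP namespace. For example (the prefix here is made up):

    ```php
    <?php
    // All plugins share one global namespace, so every public name gets a
    // unique prefix (here "ppress_", chosen purely for illustration).
    function ppress_render_listing( $atts ) {
        // ... shortcode or template logic ...
    }

    $ppress_cache_ttl = 900; // Prefix globals, too.

    // An unprefixed declaration like this would be a fatal redeclaration
    // error if any other active plugin also defined it:
    // function render_listing() { ... }
    ```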

    Another lesson had to do with how seriously they take the sanitizing of I/O. There were a number of functions in my plugin which seemed completely safe to me, but whose output still needed to be run through various WordPress sanitizing and escaping functions. These folks have been at this a long time, and it’s hard to argue that they aren’t doing things the right way. It’s a little extra trouble for developers, but one does need to be careful these days.
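    The pattern, roughly, is to sanitize on the way in and escape on the way out. A minimal sketch (the shortcode and its attribute are invented for illustration, not from my plugin):

    ```php
    <?php
    // Illustrative only: sanitize input, escape output.
    function myplugin_greeting_shortcode( $atts ) {
        $atts = shortcode_atts( array( 'name' => '' ), $atts );

        // Sanitize incoming data before using or storing it.
        $name = sanitize_text_field( $atts['name'] );

        // Escape on output, even when the value "looks" safe.
        return '<p>Hello, ' . esc_html( $name ) . '</p>';
    }
    ```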

    I was extremely impressed with the communication I received from WordPress.org during the submission process. I think my code must have been run through some automated checkers, just based on how quickly I received my feedback (usually the next day). I’d be inclined to suspect that the code was only run through some automated checkers, but when things got very granular at the end I did receive some highly personalized feedback from the reviewer. Somebody really did take the time to answer my questions and provide feedback. And the lengthy emails I received were highly informative even when a lot of it must have been boilerplate. All in all, I certainly felt better shepherded through this process than I was years ago with my first iOS app submission (though maybe that’s better these days, too).

    Would I do it again?

    I’m not sure when I’ll have the opportunity to write another WordPress plugin. The main reason is I’m not sure when I’ll really have something that I think would be a worthwhile contribution. There are a zillion plug-ins out there, and I can usually find one that does what I need it to without having to write my own code. Still, if I do come up with some idea that would make a great plug-in, I would definitely go for it. I might spend a little more time thinking about how to monetize it, though, because the donations concept is not a winner.

    In closing, if you have a killer idea for a plugin and would like to hire me to build it for you, you know where to find me!

     

  • Apple TV Siri Remote and Zwift

    Apple TV Siri Remote and Zwift

    If there is one thing that Zwifters using the Apple TV have been able to agree on, it’s the frustration of trying to use the horrible Apple TV remote with the Zwift interface. While I think part of the blame can be placed on Zwift for not better optimizing their app for the hardware available, the Apple TV remote’s touchpad is notoriously difficult to use (as anyone who has tried to enter a complex password into an Apple TV app will attest) and is a nightmare for Zwifters.

    Trying to switch bicycles in the middle of a ride using the Apple TV is excruciating. Here I am, with a heart rate north of 160, trying to calmly scroll just one bike down in the list — and scrolling down three instead. Back up one, back up three, down two… and all the while other riders pass me by as I sit at the side of the road in my misery.

    The Olympic biathlon should consist of Zwifting and using an Apple TV remote. That is far more difficult than aiming a rifle after cross country skiing at top speed for a couple of miles.

    Alternative Remotes to the Apple TV remote

    Channelmaster remote

    The first remote I tried to replace my Apple TV remote was the Channelmaster CM7000-XRC remote, which I was attracted to because it has honest-to-goodness arrow keys instead of a touchpad. My kingdom for an arrow key!

    Unfortunately, this remote did not work with Zwift, and had trouble with other apps on my Apple TV as well. Basically, any interface that didn’t have items lined up in a very neat grid (like the Home Screen) was just too hard to navigate with the Channelmaster.

    The Real Problem

    Now that the Apple TV remote isn’t a complete design disaster, there is only one thing between Zwifters and being able to actually navigate the Zwift menus, and that is the Zwift menu interface itself.

    The problem with the interface is that the UI elements are often stacked one on top of another, and there is no clear way to signal that you (the user) want to interact with an element or go to the next one. For example, there are many scrollable elements (selecting a route, a pace partner, a workout) on the menu screen, and although you can scroll up and down through the list, there are also UI elements below (and often above) the list. It becomes frustratingly difficult to navigate the list without accidentally navigating to those other items. With the new remote, you can “click” the outside wheel to navigate, but in Zwift doing so always selects the next UI element, not the next item inside the UI element you are working in. If Zwift could organize the elements so that they are in a grid (which is how Apple’s own on-screen interfaces all work), working within their menus would be infinitely easier.

    It is extremely difficult to scroll down this menu. Clicking the bottom of the scroll wheel doesn’t pick the next item in the list, rather it jumps down to the “Cancel” or “Next” buttons.

    Siri Remote for Zwift?

    I was very enthusiastic to see that Apple had redesigned the remote for the new Apple TV, and put in my Father’s Day request right away (Happy Father’s Day to you all, BTW). I’ve just tried it out, and here’s what I found.

    [sad trombone sound effect]

    Unfortunately, the “click wheel” arrow buttons on the Siri Remote do not allow me to scroll through menus in Zwift, which was the one thing I was desperately hoping for. You can scroll up and down using the wheel (basically, the edges of this new, round trackpad), but there’s no way to go “up one” or “down one”, which is what we really need. I haven’t tried it out while actually Zwifting yet, so I can’t say if it works better than the old remote, but I can tell you that it doesn’t do the thing I really was hoping it would do.

    As a small consolation prize, it does seem like a much better thought-out device. It has a better shape, you can tell the top from the bottom without really examining it too closely (unlike the old remote, which gave you a 50-50 chance of picking it up with the touchpad on top), and seems better in nearly every way. I should also say that there isn’t anything about this new remote that seems to get in the way of Zwifting, so if you go this route at least you don’t have to have one remote for Zwift and another for all the rest.

    It’s up to you, now, Zwift

    OK, Zwift, the ball is in your court. Apple just redesigned their remote, and it’ll probably be years before they do it again. You are our best bet now. Please find some way to allow us to pick routes, change bikes, etc, without the agony of the touchpad! How about a closet in the Zwift garage where I can put all those bikes that I’m never going to ride (who needs more than three or four bikes in this game?), right next to the fifty jerseys I’m never going to wear?

    It’s a great game, and it’s been great watching it evolve over the last couple of years, but that doesn’t mean it hasn’t been frustrating, too.