Posts tagged code

Troubleshooting Universal Links for Non-App Developers

Universal links are Apple's way of directing a link to an app (if installed) or to the website. Setting them up requires updates on both sides: the app needs your site's domain added to its associated domains entitlement, and your website needs to serve an Apple App Site Association (AASA) file. If, like me, you've exclusively done web work, some of the documentation and tools may feel a little dense or unfamiliar. For example, Apple's technote on Debugging Universal Links is a great resource but assumes some prerequisite app development knowledge. So, this is a guide for troubleshooting universal links designed specifically for people without that experience.
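For reference, a minimal AASA file looks something like this (the team ID, bundle ID, and paths below are placeholders; newer files can also use a components array instead of paths):

{
  "applinks": {
    "apps": [],
    "details": [
      {
        "appID": "ABCDE12345.com.example.yourapp",
        "paths": ["/posts/*", "/products/*"]
      }
    ]
  }
}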

General Gotchas

  • Links will only open in app when tapped; they cannot be copied and pasted into the URL bar.
  • The link can't be rendered on the domain you added to the app's associated domains. So basically, if you want to open a link to https://yourdomain.com in app, the link shouldn't be on the page https://yourdomain.com/test-universal-links. As a workaround, I used Replit to publish quick pages to share on mobile. You can also paste links into your Notes app for personal testing.
  • Instead of pulling the apple-app-site-association file directly from your web server, Apple serves it from their own CDN. I generally saw updates in about a day but your mileage may vary. If you want to check their cache, you should be able to hit https://app-site-association.cdn-apple.com/a/v1/yourdomain.com (see the curl example after this list).
  • The Apple docs claim you can bypass the CDN by adding ?mode=developer to the app's associated domains. However, in my experience, backed up by a few users in the Developer Forums, including the param broke universal linking entirely. (It's very possible that I just missed a step so, if you know what I did wrong, please let me know in the comments.)
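For example, to compare what your server is returning against what Apple's CDN has cached, something like this should work (assuming your file lives at the standard .well-known path; adjust if you serve it from the root instead):

curl -I https://yourdomain.com/.well-known/apple-app-site-association
curl https://app-site-association.cdn-apple.com/a/v1/yourdomain.com

The -I flag only prints the response headers, which is handy for checking the content-type.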

Tools

AASA Validator | Branch - This validator runs a few checks on your apple-app-site-association file; it's especially reassuring to confirm that the file contains valid JSON and is served with the correct content-type header.

Potential Gotchas

  • The file Branch validates comes from your web server so it may not match what Apple has saved in its CDN.

swcutil - swcutil is a command line tool that comes built-in on a Mac. You can run swcutil -h to view a list of options but the ones I found most helpful were:

  • swcutil dl -d yourdomain.com - View the contents of an apple-app-site-association file from a specific domain.
  • swcutil verify -d yourdomain.com -j ./apple-app-site-association.json -u https://yourdomain.com/page - Verify that a URL matches a pattern in an apple-app-site-association file. This checks against a JSON file on your computer (see the second parameter) so make sure to download it and point to the correct location.

Potential Gotchas

  • This isn't swcutil-specific but, if you're testing a link with a query parameter, make sure to escape the question mark (or quote the whole URL) or you'll get a zsh: no matches found error (example below).
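For instance, either of these forms should keep zsh happy (hypothetical URL, same verify command as above):

swcutil verify -d yourdomain.com -j ./apple-app-site-association.json -u 'https://yourdomain.com/page?ref=test'
swcutil verify -d yourdomain.com -j ./apple-app-site-association.json -u https://yourdomain.com/page\?ref=test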

iPhone Developer - If you have the app installed on an iPhone, you should be able to use the Universal Links Diagnostics option in Settings > Developer. Enter a URL and the tool will check whether that page would open in an installed app. It also provides the app ID.

Potential Gotchas

  • To use the Developer tools, you'll need to enable Developer Mode on your iPhone and that's trickier than expected. The toggle switch should be under Settings > Privacy & Security but it doesn't appear by default. To see the switch, you'll need to connect your phone to your computer and either a) open a project in Xcode or b) use a program like iCareFone.

Hope these help save some time and frustration!

Package Publishing Reading & Resources

I have big plans in the works to overhaul a few projects and, to prep for that, I’ve been doing a lot of reading up on different approaches to publishing JavaScript packages. These are a few resources I’ve found particularly useful and informative.

  • How to Publish an Updated Version of an npm Package – Cloud Four - I’ve used different tools to automate releases in the past but had no idea what they were doing under the hood. This article walks through creating a release and publishing to npm and Git with detailed explanations at every step. It’s a great starting point since understanding the manual process makes evaluating different automated strategies a lot easier.
  • Automate npm publishing with GitHub Actions, proper changelog, and release notes - I'm looking for a solution that includes independently versioned monorepos so this guide isn't a perfect fit. However, using GitHub Actions to manually trigger a release and enter the correct version bump (major, minor, patch, etc.) is a really clever approach.
  • Release Workflow | Yarn - Package Manager - Yarn's release workflow for monorepos is an experimental feature so I’m holding off for now but I hope it pans out. The section on deferred versioning and record keeping is especially intriguing.
  • Tools! Trying to figure out which of these options would best serve my needs:
    • semantic-release - My current tool of choice. semantic-release depends on commit messages that follow Angular's commit message conventions for versioning. Unfortunately, it doesn't play nice with monorepos (for more details, here's a little extra credit reading: The chronicles of semantic-release and monorepos).
    • Auto - Intuit - Automates releases based on pull request labels. I used this at a previous job and appreciated that it didn't require linting commit messages or any extra effort from contributors. The downside: Lerna is a must for use with monorepos.
    • Release It! - This seems promising. A CLI tool that can be used in interactive or continuous integration mode. The big appeal for me is a Yarn workspaces specific plugin.

Finally, recommendations are welcome so here’s my brief. I want to combine multiple packages that currently live in separate repos into a single monorepo using Yarn workspaces. Ideally, I’d like to independently version the packages without adding Lerna and use GitHub Actions for CI/CD. If you have a similar setup, let me know what's worked for you.

Next.js and Tumblr as a CMS Part 4: Open Graph Images

Way back in my first post on using Next.js with Tumblr, I mentioned getting more control over my blog as one of the big motivations for the switch. So, I thought I’d wrap up by going over a couple of the specific things I meant by that: generating Open Graph images and adding syntax highlighting to code blocks.

This was originally intended as one post but it got a little long and I’m a little slow so I’ll start with Open Graph images.

Open Graph images are the preview images you’ve probably seen when sharing a link on a social media site like Twitter or Facebook. By default, Tumblr will display a generic image with the Tumblr logo and some themes might pull in your avatar or let you upload a custom image. However, there isn’t a good way to attach different images to different posts or to dynamically generate them. My goal was for each of my posts to have a unique, text-based image displaying its title or description and type.

Sample Open Graph sharing image

There are a lot of good articles on the topic but their instructions didn’t get me exactly what I wanted so I wound up picking and choosing to cobble my solution together. Dynamic Open Graph images with Next.js was so close but it uses next-api-og-image, which relies on chrome-aws-lambda under the hood, and I couldn’t get it to work on Vercel. I even tried the suggestion to install an older version of chrome-aws-lambda but it just wouldn’t deploy. Generate Open Graph images for your static Next.js site generates its images during the build process instead of on the fly, but it introduced me to Playwright, which was invaluable. Those two articles were big influences on the code below.

Start by installing the necessary playwright packages:

yarn add playwright playwright-core playwright-aws-lambda

Next, in your post component, add the meta tag inside the next/head block:

<Head>
  <meta property="og:image" content={`${process.env.NEXT_PUBLIC_BASE_URL}/api/og-image?headline=${post.headline || post.summary}&type=${post.type}`} />
  ...
</Head>

Two things to note: 1) for convenience, I store my base URL as an environment variable so you’ll probably need to replace NEXT_PUBLIC_BASE_URL and 2) we’ll be using query params to pass the post headline and type.

The content URL points to a route we’ll create in pages/api/og-image.js. I’ve truncated a lot of the HTML and CSS since those will depend on how you want your image to look:

const playwright = require('playwright-aws-lambda');

export default async function handler (req, res) {
  const html = `
    <html>
      <head>
        <meta charset="UTF-8">
        
        <style>
          *, *:after, *:before {
            box-sizing: border-box;
          }

          html {
            font: 8px 'museo-sans-rounded', sans-serif;
            line-height: 1.4;
          }

          body {
            background: #f4f4f4;
            margin: 0;
            padding: 2rem;
          }
          
          ...
        </style>
      </head>

      <body>
        <div class="og">
          ...

          <div class="og__type">
            <span>laurenashpole.com</span>  —  ${(req.query || {}).type || ''} post
          </div>

          <h1 class="og__headline">${(req.query || {}).headline || ''}</h1>
        </div>        
      </body>
    </html>
  `;

  if (process.env.NODE_ENV === 'development') {
    res.setHeader('Content-Type', 'text/html');
    return res.end(html);
  }

  const browser = await playwright.launchChromium({ headless: true });
  const page = await browser.newPage();
  await page.setViewportSize({ width: 1200, height: 630 });
  await page.goto('about:blank');
  await page.setContent(html, { waitUntil: 'networkidle' });
  const img = await page.screenshot({ type: 'png' });
  await browser.close();

  res.setHeader('Cache-Control', 's-maxage=31536000, stale-while-revalidate');
  res.setHeader('Content-Type', 'image/png');
  res.end(img);
}

Now, if you visit that URL while developing locally, you should see an HTML page so you can inspect and tweak your designs. In production, Playwright will launch a headless browser, open a new page and insert your HTML, and then take and return a PNG screenshot.

And that’s how I set up my Open Graph images. Next time I’ll actually finish the series with syntax highlighting for code blocks.

Next.js and Tumblr as a CMS Part 3: Sitemap and RSS

If you’ve been following along with my series on using Tumblr as a Next.js CMS, last time we looked at fetching data and re-creating all the pages you’d expect to find on your average blog. That tutorial skipped two items that Tumblr normally handles: the sitemap and RSS feed. They require a little extra attention so in this post we’ll build on the earlier code to generate those files.

Sitemap

The basic Tumblr sitemap contains links to your homepage and each individual post. You can also add any other URLs you want to include (I like to throw in a list of featured tags) but for now we’ll focus on posts.

In the tumblr.js file from the last entry, reuse the CLIENT and getPosts code to create a method to return all posts:

export async function findAll (limit = 50) {
  const client = tumblr.createClient(CLIENT);
  const initialResponse = await getPosts(client, limit, 0);
  const totalPages = Math.floor(initialResponse.total_posts / limit);

  const posts = await [...Array(totalPages).keys()].reduce(async (arr, i) => {
    const response = await getPosts(client, limit, limit * (i + 1));
    return [ ...(await arr), ...response.posts ];
  }, []);

  return { ...initialResponse, posts: [ ...initialResponse.posts, ...posts ] };
}

The Tumblr API documentation isn’t super clear when it comes to the number of posts you can retrieve in one request. It implies that 20 is the maximum but I haven’t found that to be the case in practice so, to make updating easier just in case, the findAll method accepts limit as a parameter.

Next, in the pages directory, add a new file called sitemap.xml.js. The file will return an empty React component and most of the action takes place in getServerSideProps where we’ll get the posts, use them to generate an XML string, and then switch the Content-Type to XML with setHeader.

import { findAll } from '../utils/tumblr';

const Sitemap = () => {};

export async function getServerSideProps ({ res }) {
  const response = await findAll();

  const sitemap = `<?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.sitemaps.org/schemas/sitemap/0.9 http://www.sitemaps.org/schemas/sitemap/0.9/sitemap.xsd">
      <url>
        <loc>${process.env.NEXT_PUBLIC_BASE_URL}</loc>
        <changefreq>weekly</changefreq>
        <lastmod>${new Date().toISOString().substring(0, 10)}</lastmod>
      </url>

      ${response.posts.map((post) => `
        <url>
          <loc>${process.env.NEXT_PUBLIC_BASE_URL}${new URL(post.post_url).pathname}</loc>
          <lastmod>${new Date(post.date).toISOString().substring(0, 10)}</lastmod>
        </url>
      `).join('')}
    </urlset>
  `;

  res.setHeader('Content-Type', 'text/xml');
  res.write(sitemap);
  res.end();

  return {
    props: {}
  };
}

export default Sitemap;

One thing to note: I store my blog’s base URL as an environment variable so you’ll need to either replace NEXT_PUBLIC_BASE_URL above or add it to your .env files to get this working.

RSS

For the RSS feed, we’ll be using the Feed package to handle the data formatting. You can install it with:

yarn add feed

Create a directory called rss inside pages and then add an index.js file inside of it. By default, the Tumblr feed shows the ten most recent posts so we can use the same find method we used before for pages.

The file for the RSS feed will look pretty similar to the sitemap. The main difference is that instead of using string interpolation to generate XML we’ll follow the example from the Feed docs:

import { Feed } from 'feed';
import { find } from '../../utils/tumblr';

const Rss = () => {};

export async function getServerSideProps ({ res }) {
  const response = await find();

  const feed = new Feed({
    title: 'Your Tumblr Title',
    description: 'Your Tumblr description.',
    link: process.env.NEXT_PUBLIC_BASE_URL
  });

  response.posts.forEach((post) => {
    feed.addItem({
      title: post.title || post.summary,
      description: `${post.type === 'photo' ? post.photos.map((photo) => `<img src=${photo.original_size.url} /><br /><br />`).join('') : ''}${post.type === 'video' ? `<iframe width="700" height="383" src="https://www.youtube.com/embed/${post.video.youtube.video_id}" frameborder="0" /><br /><br />` : ''}${post.type === 'link' ? `<a href=${post.url}>${post.title}</a>: ` : ''}${post.type === 'answer' ? `${post.question}: ` : ''}${post.trail[0].content_raw}`,
      link: `${process.env.NEXT_PUBLIC_BASE_URL}${new URL(post.post_url).pathname}`,
      pubDate: new Date(post.date).toISOString().substring(0, 10),
      category: post.tags.map((tag) => { return { name: tag }; })
    });
  });

  res.setHeader('Content-Type', 'text/xml');
  res.write(feed.rss2());
  res.end();

  return {
    props: {}
  };
}

export default Rss;

To match the Tumblr-provided feed, I added some post-type-specific intros before the raw content in the description text.


That should cover sitemaps and RSS. My next post will dig into some of the bells and whistles that inspired the move to a separate app in the first place like code syntax highlighting and custom social sharing images.

Next.js and Tumblr as a CMS Part 2: Data Fetching

Welp, I thought I could start a series of posts, get married shortly after, and not wind up with a huge delay between them. That was overoptimistic. But at long last, here’s my second post on using Tumblr with Next.js. Check out the first one for background on why Tumblr and thoughts on organizing repos or keep reading for pointers on fetching data (with a healthy smattering of mistakes I made that you should avoid).

The Pages

Assuming you’ve already bootstrapped a Next.js app (if you haven’t, follow the instructions here), the first step is figuring out which pages you’ll need to re-create to match your existing Tumblr. For a basic blog, your pages directory should look something like this:

pages:
  ├── page:
  │   └── [page].js
  ├── post:
  │   └── [...id].js
  ├── tagged:
  │   ├── [tag]:
  │   │   └── page:
  │   │       └── [page].js
  │   └── [tag].js
  ├── 404.js
  └── index.js

If your Tumblr has custom pages, you’ll add those directly under /pages. One thing to keep in mind: the Tumblr API doesn’t support saving user questions or submissions so Next.js might not be the best solution if you need that functionality.

Connecting to Tumblr

We’ll be using the tumblr.js NPM package to connect to the Tumblr API. Install it with:

yarn add tumblr.js

For authentication, you’ll need to get an OAuth consumer key, consumer secret, token, and token secret. Head over to Tumblr and register a new application to get your consumer key and consumer secret. Don’t stress over that form too much, you can ignore any non-required fields.

Once you have those values, enter them in the Tumblr API console and click authenticate to get your tokens. Then copy all four and save them to an .env.local file for use in your Next.js app:

TUMBLR_CONSUMER_KEY=****
TUMBLR_CONSUMER_SECRET=****
TUMBLR_TOKEN=****
TUMBLR_TOKEN_SECRET=****

If you’re wondering why I recommend this method instead of using fetch with an API key, that was one of my early mistakes. Everything was fine until I finished the new site and updated my blog’s visibility (hiding it outside of the Tumblr dashboard). At that point, all my requests started failing because I no longer had the proper permissions. This approach should work even if you change your settings.

Fetching Data

Now to write some actual code!

Start by creating a file called tumblr.js. I keep mine in a utils folder but you can put it wherever you’d like. This file will export a find method that uses the package we installed and the secrets we saved in the last step to create a Tumblr client and request blog posts. Here’s a basic example (swap out “yourtumblr” for your Tumblr):

import tumblr from 'tumblr.js';

const CLIENT = {
  consumer_key: process.env.TUMBLR_CONSUMER_KEY,
  consumer_secret: process.env.TUMBLR_CONSUMER_SECRET,
  token: process.env.TUMBLR_TOKEN,
  token_secret: process.env.TUMBLR_TOKEN_SECRET,
  returnPromises: true
};

export async function find (limit = 10, page = 1, id, tag) {
  const client = tumblr.createClient(CLIENT);
  return await getPosts(client, limit, limit * (page - 1), id, tag);
}

function getPosts (client, limit, offset, id, tag) {
  return new Promise ((resolve) => {
    client.blogPosts('yourtumblr.tumblr.com', { limit, offset, id, tag })
      .then((response) => resolve(response))
      .catch(() => resolve({}));
  });
}

The find method accepts limit, page number, post ID, and tag params so it can be reused across all the Next.js pages listed above.

Static Props & Paths

Using the same find method means writing similar getStaticProps and getStaticPaths blocks for every page so I’ll just run through index.js and post/[...id].js here. If you want more examples, check out the pages directory in my blog repo (although I do a little additional post parsing that I haven’t mentioned yet).

The index.js page uses the default params and is fairly simple:

import { find } from '../utils/tumblr';

...

export async function getStaticProps () {
  const response = await find();

  return {
    props: response,
    revalidate: 3600
  };
}

Things get a little more complicated in post/[...id].js. We’ll be using incremental static regeneration to avoid hitting Tumblr’s 300 API calls per minute rate limit during the build process. (My second early mistake: when I first started, ISR was brand new so I tried other workarounds for limiting requests, like adapting this method for caching data globally. Although it technically worked, it caused a lot of extra deploys and stale data.)

In the code below, I’m statically generating the most recent 20 posts on my blog and revalidating after an hour (3600 seconds):

import { find } from '../../utils/tumblr'; 

...

export async function getStaticPaths () {
  const response = await find(20);

  return {
    paths: response.posts.map((post) => {
      const params = new URL(post.post_url).pathname.replace('/post/', '').split('/');
      return { params: { id: [ params[0], params[1] || '' ] } };
    }),
    fallback: 'blocking'
  };
}

export async function getStaticProps ({ params }) {
  const response = await find(1, 1, params.id[0]);

  if (!(response.posts || [])[0]) {
    return { notFound: true };
  }

  return {
    props: {
      post: response.posts[0]
    },
    revalidate: 3600
  };
}

You can play around with the number of posts per page and the time between revalidations to see what works for you. If Tumblr isn’t able to find any posts with the current ID, the 404 page renders instead.
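The tag and pagination pages follow the same pattern. As a rough sketch (assuming a pages/tagged/[tag].js route and the find signature from earlier; I've left the component itself out), a tag page's data fetching might look like:

import { find } from '../../utils/tumblr';

...

export async function getStaticPaths () {
  // Generate tag pages on demand instead of at build time
  return { paths: [], fallback: 'blocking' };
}

export async function getStaticProps ({ params }) {
  // Default limit and page, no post ID, filter by the requested tag
  const response = await find(10, 1, undefined, params.tag);

  if (!(response.posts || []).length) {
    return { notFound: true };
  }

  return {
    props: response,
    revalidate: 3600
  };
}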

That’s it for now! Up next, sitemaps and RSS.

Next.js and Tumblr as a CMS

Thought I would do a series of posts on moving my blog from a Tumblr-hosted and themed site to a Tumblr powered Next.js app.

You might reasonably ask, “Why would anyone want to do that when there are so many actual CMS options available?”

In my case, it’s mostly sentimental. Way back when I was deciding whether to switch careers and try coding full time, I took a lot of freelance jobs converting PSDs to Tumblr themes and I’ve been using it ever since.

But, I finally wanted a little more control over my blog and a good excuse to experiment more with Next.js so I decided to mix it up just a little. Let’s dig into some of the decision making, adventures in data fetching, and general idiosyncrasies that come with using Tumblr as a CMS.


How To Set It Up

Starting at the beginning, the very first question was how to set everything up. Where should the site and the code live? How do you share the parts that need to be shared? It seems simple but there are a lot of options.

Keeping with the original Tumblr setup, I decided to create a new app for my blog instead of trying to integrate it with the rest of my site (which was already using Next.js). I wanted to use Vercel for hosting which meant I would just need to update my subdomain records once everything was up and running.

Since Tumblr templates are basically HTML files that include Tumblr’s custom blocks and variables, I’d never really shared much code between my blog and the rest of my site even though they look pretty similar. I had a small script to generate stylesheets but that was it. Using Next.js for both opened up the possibility of sharing React components, variables, and utilities in addition to CSS.

To figure out how to do that, I followed one article (6 Ways to Share React Components in 2020) to another (4 Git Submodules Alternatives You Should Know) and finally landed at Git subtree: the alternative to Git submodule for step-by-step instructions.

Any solution that required publishing components, individually or as a monorepo, seemed like overkill for a smallish, one dev project. Git subtrees provided a way to nest one repo as a subdirectory in others which was exactly what I wanted. I split out shared components like the header, footer, form elements, etc. into a separate repo, ran the commands from the tutorial above in my blog and site repos to hook it all up, and voila. No copying and pasting duplicate code between sections and no publishing to NPM or other third-parties.
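In case it's useful, the subtree setup boils down to a couple of commands like these (the remote URL and prefix directory are placeholders; the tutorial above walks through it properly):

git remote add shared https://github.com/yourusername/shared-components.git
git subtree add --prefix components/shared shared main --squash
git subtree pull --prefix components/shared shared main --squash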

So that’s the foundation, stay tuned for the next installment for data fetching and some actual code!

React Inner Image Zoom Version 3.0.0

React Inner Image Zoom version 3.0.0 went out earlier this week with a handful of bug fixes, test and build improvements, and one major change.

What’s the big thing to look out for? By popular demand, the imgAttributes prop was added to pass down (almost) any valid React img attributes in a single object instead of as individual props. That means scrSet, sizes, alt, and title are gone but in exchange you get all the data attributes and event handlers you could want. I haven’t submitted updated type definitions to DefinitelyTyped yet but I’ll try to get that done in the next few days.

This release also included a handy new Changelog so I would be remiss not to include the official record here:

Changed

  • Replaced srcSet, sizes, alt, and title props with imgAttributes to set the original image’s attributes.
  • Show close button when moveType is set to “drag” on all breakpoints.
  • Switched from setTimeout to onTransitionEnd to check that zoomed image has finished fading out.

Added

  • This handy CHANGELOG.

Fixed

  • Added stopPropagation on touchmove to prevent events below fullscreen modal.

If you run into any bugs, please let me know in the GitHub issues.

Vue Inner Image Zoom v2

As promised, now that migrating is easier I’ve updated my Vue Inner Image Zoom component to support Vue 3. If you’re still on Vue 2 and want to use the component, just make sure to install it as vue-inner-image-zoom@1.1.1.
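For example, with yarn that would be something like:

yarn add vue-inner-image-zoom@1.1.1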

I also updated the demos site to remove the lazy loading example since vue-lazyload isn’t compatible with Vue 3 (I’m open to any suggestions for replacements) and switched from vue-slick-carousel to Swiper both for compatibility and because it’s my preferred carousel library.

If I broke anything and you run into any new bugs, please report them on the GitHub issues page.