Creating a Medium API for Next.js from an RSS feed
Okay, so today's post is actually kind of fun. As you may not know, Medium doesn't have a proper fetch API. They only have a publishing API, which makes sense as they want to own the reading experience, not just the publishing. So, if you want to show your Medium posts on your site, you'll need to do a bit of work.
On one of my recent site designs, I wanted to use Medium for the writing experience but publish the posts on (and drive traffic to) my own site, so I needed a way to fetch Medium posts programmatically in my Next.js site.
RSS to the rescue (rsscue?)
For this part, we're going to need the following libraries from NPM:
- rss-parser: A small library for, you guessed it, parsing RSS feeds from an external URL and turning them into JavaScript objects.
- jsdom: A pure JavaScript implementation of web standards that emulates a subset of a web browser. We're going to use it for DOM manipulation, handy since some of the RSS response is HTML.
- date-fns: A modern, comprehensive toolset for working with JavaScript dates. We'll be using it for date parsing and formatting.
Let's get into it:
import Parser from "rss-parser";
import { JSDOM } from "jsdom";
import { format, parseISO } from "date-fns";

type IMediumPost = {
  creator: string;
  title: string;
  link: string;
  "content:encoded": string;
  guid: string;
  isoDate: string;
  categories: string[];
};

export async function getFeed() {
  const parser = new Parser();

  // Medium exposes an RSS feed for every user at /feed/@username.
  const { items } = await parser.parseURL(
    "https://medium.com/feed/@haydenbleasel"
  );

  return items as IMediumPost[];
}

export async function getPosts() {
  const items = await getFeed();

  const posts = items.map((item) => {
    const content = item["content:encoded"];
    const dom = new JSDOM(content);

    return {
      id: item.guid,
      title: item.title,
      // The first <h4> in the content is typically the post's subtitle.
      description: dom.window.document.querySelector("h4")?.textContent ?? "",
      date: format(parseISO(item.isoDate), "MMMM d, yyyy"),
      // The first <img> is the cover image; bump the width from 1024px to 3840px.
      image:
        dom.window.document
          .querySelector("img")
          ?.src.replace("max/1024", "max/3840") ?? "",
      link: item.link,
      tags: item.categories,
      content,
    };
  });

  return posts;
}
A couple of notes on the above:
As Medium's RSS response doesn't contain an excerpt, summary or description, we can create our own by pulling text from the first h4 element on the page, which is typically the subtitle. This isn't bulletproof, so make sure you have a consistent title and subtitle in your Medium posts.
We do a similar thing for the cover image, which we can get by parsing the src attribute of the first image that appears on the page. By default, Medium serves its images at a maximum width of 1024px (denoted by the max/1024 param in the URL structure), but with a simple string replacement, we can bump this up to whatever we'd like.
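To make that second note concrete, here's a quick sketch of the string replacement. The URL below is an assumed example of Medium's CDN structure, not a real image:

// Hypothetical Medium CDN URL; the ".../max/1024/..." segment is the part that matters.
const original = "https://cdn-images-1.medium.com/max/1024/1*abc123.png";

// Bump the requested width by rewriting the "max/<width>" segment.
const upscaled = original.replace("max/1024", "max/3840");
// => "https://cdn-images-1.medium.com/max/3840/1*abc123.png"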
Now, let's put these functions to use! I like to start by creating a global IPost type in case we need it elsewhere:
type IPost = {
  id: string;
  title: string;
  link: string;
  description: string;
  image: string;
  date: string;
  content: string;
  tags: string[];
};
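One way to make that type available everywhere without importing it (this is just an assumed setup, not something Next.js requires) is to put it in an ambient declaration file that your tsconfig already picks up:

// types.d.ts (assumed filename): ambient declarations in a .d.ts file with no
// top-level import/export are visible project-wide without an import.
type IPost = {
  id: string;
  title: string;
  link: string;
  description: string;
  image: string;
  date: string;
  content: string;
  tags: string[];
};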
Now, I like to do a bit of SEO work with next-seo
, but otherwise you won't need any other libraries for this part:
import { BlogJsonLd } from 'next-seo';
import type { GetStaticProps, NextPage } from 'next';
import Image from 'next/image';
import Link from 'next/link';
import { useRouter } from 'next/router';
import Layout from '../../components/Layout'; // your own site layout component
import { getPosts } from '../../utils/medium';
import styles from './Blog.module.css';

type IBlog = {
  posts: IPost[];
};

const Blog: NextPage<IBlog> = ({ posts }) => {
  const { asPath } = useRouter();

  // Page-level metadata; swap these for your own copy or CMS data.
  const title = 'Blog';
  const description = 'Thoughts, stories and ideas, cross-posted from Medium.';

  // Sort the post dates chronologically so we can report the oldest and newest.
  const dates = posts
    .map((post) => new Date(post.date))
    .sort((first, second) => first.getTime() - second.getTime());

  return (
    <Layout
      title={title}
      description={description}
      image={posts[0].image}
    >
      <BlogJsonLd
        url={`${process.env.NEXT_PUBLIC_SITE_URL}${asPath}`}
        title={title}
        images={posts.map((post) => post.image)}
        datePublished={dates[0].toISOString()}
        dateModified={dates[dates.length - 1].toISOString()}
        authorName="Hayden Bleasel"
        description={description}
      />
      <div className={styles.posts}>
        {posts.map((post, index) => (
          <Link href={post.link} className={styles.post} key={post.id}>
            <Image
              width={index === 0 ? 1128 : 742}
              height={index === 0 ? 600 : 395}
              alt={post.title}
              src={post.image}
              objectFit="cover"
              priority={index === 0}
            />
            <h2>{post.title}</h2>
            <p>{post.description}</p>
            <small>Posted {post.date}</small>
          </Link>
        ))}
      </div>
    </Layout>
  );
};

export const getStaticProps: GetStaticProps = async () => {
  const posts = await getPosts();

  return {
    props: {
      posts,
    },
  };
};

export default Blog;
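One optional tweak, and this is purely a suggestion rather than part of the original setup: because getStaticProps only runs at build time, the list of posts will go stale until your next deploy. Next.js's Incremental Static Regeneration lets you return a revalidate value so the page is periodically rebuilt in the background:

export const getStaticProps: GetStaticProps = async () => {
  const posts = await getPosts();

  return {
    props: {
      posts,
    },
    // Re-fetch the Medium RSS feed at most once per hour (the interval is arbitrary).
    revalidate: 60 * 60,
  };
};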
But wait, that's not all! Order now and receive dynamic pages that pull the full Medium content into your site!
Rendering a single Medium post as a dynamic page is a little trickier. To emulate a natural rich-text reading experience plus some of Medium's handy features, we'll need a handful of libraries, including:
- medium-zoom: A JavaScript library for zooming images like Medium.
- highlight.js: A syntax highlighter written in JavaScript.
- slugify: A handy little library for slug-ifying a string, which we'll use to create a nicer vanity URL.
Let's start with the layout. It's fairly straightforward, since most of the complexity is going to be in the dynamic HTML we have to render with dangerouslySetInnerHTML
. Still, it's a good opportunity to add some structured data and set up a useEffect hook for Highlight.js and Medium Zoom.
import type { GetStaticProps, GetStaticPaths, NextPage } from 'next';
import { useEffect } from 'react';
import { ArticleJsonLd } from 'next-seo';
import { JSDOM } from 'jsdom';
import { useRouter } from 'next/router';
import Image from 'next/image';
import slugify from 'slugify';
import mediumZoom from 'medium-zoom';
import hljs from 'highlight.js';
import 'highlight.js/styles/atom-one-dark.css';
import Layout from '../../components/Layout'; // your own site layout component
import { getFeed } from '../../utils/medium';
import styles from './Article.module.css';

type IArticle = {
  post: IPost;
};

const Article: NextPage<IArticle> = ({ post }) => {
  const { asPath, basePath } = useRouter();

  useEffect(() => {
    // Attach Medium-style zooming to every image in the post body.
    const zoom = mediumZoom('figure img');

    // Syntax-highlight every <pre> block in the rendered content.
    document.querySelectorAll('pre').forEach((block) => {
      hljs.highlightElement(block);
    });

    return () => {
      zoom.detach();
    };
  }, []);

  return (
    <Layout
      title={post.title}
      description={post.description}
      image={post.image}
      openGraph={{
        type: 'article',
        article: {
          publishedTime: post.date,
          tags: post.tags,
        },
      }}
    >
      <ArticleJsonLd
        url={`${process.env.NEXT_PUBLIC_SITE_URL}${asPath}`}
        title={post.title}
        images={[post.image]}
        datePublished={post.date}
        authorName={['Hayden Bleasel']}
        publisherName="Hayden Bleasel"
        publisherLogo={`${basePath}/images/cover.jpg`}
        description={post.description}
      />
      <h1>{post.title}</h1>
      <p>{post.description}</p>
      <small>Posted {post.date}</small>
      <p>Tagged under {post.tags.join(', ')}</p>
      <Image
        layout="responsive"
        src={post.image}
        alt={post.title}
        width={1314}
        height={876}
        objectFit="cover"
      />
      <div
        className={styles.content}
        dangerouslySetInnerHTML={{ __html: post.content }}
      />
    </Layout>
  );
};
Using our exported getFeed() function from earlier, we can turn the list of items (posts) into an array of valid paths for the dynamic page. Rather than using the item ID, I prefer to use a slugified version of the title so the URL is a bit nicer.
export const getStaticPaths: GetStaticPaths = async () => {
  const items = await getFeed();

  const paths = items.map((item) => ({
    params: {
      id: slugify(item.title, {
        lower: true,
        strict: true,
      }),
    },
  }));

  return {
    paths,
    fallback: false,
  };
};
export default Article;
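For a sense of what those slugify options do: lower lowercases the result and strict strips out any characters that aren't alphanumeric. The title below is made up, purely for illustration:

import slugify from 'slugify';

// A hypothetical Medium title, just to show the shape of the output.
const slug = slugify('Creating a Medium API for Next.js!', {
  lower: true,
  strict: true,
});
// => 'creating-a-medium-api-for-nextjs'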
Now for the hard part. The content field returned in the RSS feed is super basic and comes with a slew of issues we'll need to solve for:
- Automatically highlighting Medium's code snippets with highlight.js is troublesome, as Medium tends to break said snippets into multiple pre tags for some reason. So we need to merge these tags while preserving the structure (there's a small sketch of the problem after this list).
- As mentioned before, all images are typically 1024px wide by default... let's up that.
- The title, subtitle and cover photo are embedded in the main content. If we want to style them differently and use them in metadata, we'll need to separate them out and remove them from the main content.
- Targeting and styling iframes is a bit easier with a wrapper, so we'll need to find all iframes and wrap them in an easily targetable div element.
- External links in the content (links that point to content outside your current domain) should have rel="noopener noreferrer" attributes and a target of _blank so they open in a new tab.
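Here's the pre-merging problem in miniature. The HTML below is a simplified assumption of what Medium's feed can emit for one multi-line snippet, not a verbatim sample:

// What the feed can look like: one snippet split across adjacent <pre> tags.
const before = '<pre>const a = 1;</pre><pre>const b = 2;</pre>';

// What we want after merging, so highlight.js sees a single block.
const after = '<pre>const a = 1;<br /><br />const b = 2;</pre>';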
Let's do this.
export const getStaticProps: GetStaticProps = async ({ params }) => {
  const items = await getFeed();
  const id = params?.id as string;

  // Find the feed item whose slugified title matches the requested slug.
  const post = items.find(
    ({ title }) =>
      id ===
      slugify(title, {
        lower: true,
        strict: true,
      })
  );

  if (!post) {
    return { notFound: true };
  }

  const content = post['content:encoded'];
  const dom = new JSDOM(content);

  // The first <h4> is the subtitle; use it as the description.
  const description =
    dom.window.document.querySelector('h4')?.textContent ?? '';

  // Request larger images than Medium's 1024px default.
  dom.window.document.querySelectorAll('img').forEach((img) => {
    img.src = img.src.replace('max/1024', 'max/3840');
  });

  // The first <img> is the cover image.
  const image = dom.window.document.querySelector('img')?.src ?? '';

  // Remove the subtitle and cover figure from the body; we render them separately.
  dom.window.document.querySelector('h4')?.remove();
  dom.window.document.querySelector('figure')?.remove();

  // Open external links in a new tab.
  dom.window.document.querySelectorAll('a').forEach((node) => {
    if (!node.href.startsWith('https://haydenbleasel.com')) {
      node.rel = 'noopener noreferrer';
      node.target = '_blank';
    }
  });

  // Merge adjacent <pre> tags so code snippets highlight as one block.
  dom.window.document.querySelectorAll('body > *').forEach((node) => {
    const prev = node.previousElementSibling;

    if (prev && prev.nodeName === 'PRE' && node.nodeName === 'PRE') {
      prev.innerHTML += `<br /><br />${node.innerHTML}`;
      node.remove();
    }
  });

  // Wrap iframes in an easily targetable div.
  dom.window.document.querySelectorAll('iframe').forEach((node) => {
    const wrapper = dom.window.document.createElement('div');

    wrapper.className = 'iframe-wrapper';
    wrapper.innerHTML = node.outerHTML;
    node.replaceWith(wrapper);
  });

  return {
    props: {
      post: {
        title: post.title,
        id: slugify(post.title, {
          lower: true,
          strict: true,
        }),
        date: post.isoDate,
        content: dom.window.document.querySelector('body')?.innerHTML ?? '',
        description,
        image,
        link: post.link,
        tags: post.categories,
      },
    },
  };
};
And there you have it: an index page and a dynamic page that will fetch and parse your Medium articles. Note: if you're going to do this, remember to set canonical URLs in Medium for each post so you don't get penalised for duplicating the same content across multiple domains.
That's it! Now go forth and cross-post your amazing content.