Building A Portfolio Site With Hugo
Why A Portfolio Site?⌗
A portfolio site showcases IT skills and is a great way to stand out while pursuing new opportunities in a competitive job market. Additionally, the site can potentially build new connections with peers.
The Goal⌗
- Import markdown files from Obsidian into Hugo.
- Automate this process with a script.
- Build the portfolio site with Hugo and a provided theme.
- Keep the code in a git repository for version control.
- Create a GitHub Action so that when the Hugo site is updated and pushed to GitHub, a workflow is kicked off to sync the necessary files to S3.
- The files in the S3 Bucket will produce a static website.
Building It All Out⌗
Note On The S3 Bucket And CloudFront⌗
In a previous project predating the posts on my static website, I already set up a public S3 bucket with CloudFront and the necessary Route53 settings. In a future version of this post, I intend to add the steps necessary to set up the public S3 bucket, CloudFront, and Route53.
Install Hugo⌗
- Go to the Hugo Website: https://gohugo.io/installation/
- I am currently building this on a Mac. Per the site, I used the following command to install Hugo:
brew install hugo
- Verify Hugo installed with the following command:
hugo version
Creating A Folder For The Portfolio Site⌗
Open a terminal and create a folder to store the portfolio site. In my case, I chose “Hugo-Portfolio”:
mkdir Hugo-Portfolio
Create The New Hugo Site⌗
In the folder created to store the site, run the following command. (Note that `hugo new site Hugo-Portfolio` creates a `Hugo-Portfolio` folder of its own, so it can also be run from the parent directory instead.)
hugo new site Hugo-Portfolio
Build The Portfolio Site With Hugo⌗
Install The re-Terminal Theme For Hugo⌗
- Install the theme as a git submodule with the following command:
git submodule add -f https://github.com/mirus-ua/hugo-theme-re-terminal.git themes/re-terminal
Copy Contents To Hugo.toml File⌗
From the theme’s site page https://themes.gohugo.io/themes/hugo-theme-re-terminal/#how-to-configure copy the configuration and open the hugo.toml file with a text editor of choice. Delete all of the text inside, paste in the copied configuration, and save the changes.
Create The Folders for The Site Pages In Hugo⌗
- `cd` into the `content` folder.
- Create `About`, `Home`, `Projects`, and `Scripts` folders using the `mkdir` command.
- `cd` back into the project directory.
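The steps above can be sketched as a few shell commands (folder names match the ones I used; `mkdir -p` is harmless if a folder already exists):

```shell
# Run from the site root (e.g. Hugo-Portfolio).
# -p creates missing parents and is a no-op for existing folders.
mkdir -p content
cd content
mkdir -p About Home Projects Scripts
cd ..
```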
Configure The Site Pages⌗
Use a text editor of choice to edit the hugo.toml file in the site’s directory. At the bottom of the file is text that looks like the below:
[languages.en.menu]
[[languages.en.menu.main]]
identifier = "about"
name = "About"
url = "/about"
Add in the sections needed. I did the following:
[languages.en.menu]
[[languages.en.menu.main]]
identifier = "about"
name = "About"
url = "/about"
[[languages.en.menu.main]]
identifier = "projects"
name = "Projects"
url = "/projects"
[[languages.en.menu.main]]
identifier = "posts"
name = "Posts"
url = "/posts"
[[languages.en.menu.main]]
identifier = "scripts"
name = "Scripts"
url = "/scripts"
In the block above the section for the site’s pages, I changed the block to the following:
[languages]
[languages.en]
languageName = "English"
title = "Carl Kernek"
Near the top of the file, I changed the following:
- showMenuItems to “4”
- contentTypeName to “home”
- themeColor to “blue”
- paginate to “4”
Write the changes to the file, save, and exit.
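Taken together, my top-of-file changes look roughly like the fragment below (the theme’s sample config uses bare integers for `showMenuItems` and `paginate`; every other value is left as shipped by the theme):

```toml
themeColor = "blue"
showMenuItems = 4
contentTypeName = "home"
paginate = 4
```

Note that newer Hugo versions (0.128+) deprecate the top-level `paginate` setting in favor of `pagerSize` under a `[pagination]` block, so a warning at build time may suggest switching.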
Create Rsync Script⌗
I created an rsync script to make it easier to copy the `.md` files from Obsidian to Hugo.
Note: I did not include my exact file paths for security reasons.
#!/bin/zsh
# Sync each Obsidian source folder into its Hugo content destination.
echo "About to perform the rsync"
rsync -av --delete "/Source/Folder1" "/Destination/Folder"
rsync -av --delete "/Source/Folder2" "/Destination/Folder"
rsync -av --delete "/Source/Folder3" "/Destination/Folder"
rsync -av --delete "/Source/Folder4" "/Destination/Folder"
rsync -av --delete "/Source/Folder5" "/Destination/Folder"
echo "rsync complete"
exit 0
I saved the file in the `Hugo-Portfolio` folder as `rsync-notes.sh`.
Make Changes to Network Chuck’s Images.py Script⌗
I copied Network Chuck’s Python script from https://blog.networkchuck.com/posts/my-insane-blog-pipeline/#maclinux-1 and changed it to the following. The main change is wrapping the logic in a function so that it can easily be reused for the various folders I wish to have content in for the portfolio site.
import os
import re
import shutil
# Paths
posts_dir = 'path to posts dir in Hugo folder'
projects_dir = 'path to projects dir in Hugo folder'
home_dir = 'path to home dir in Hugo folder'
scripts_dir = 'path to scripts dir in Hugo folder'
about_dir = 'path to about dir in Hugo folder'
attachments_dir = 'path to attachments dir in Obsidian'
static_images_dir = 'path to static images dir in Hugo folder'
def images(posts_dir, attachments_dir, static_images_dir):
    # Step 1: Process each markdown file in the posts directory
    for filename in os.listdir(posts_dir):
        if filename.endswith(".md"):
            filepath = os.path.join(posts_dir, filename)

            with open(filepath, "r") as file:
                content = file.read()

            # Step 2: Find all image links in the format [[image.png]]
            images = re.findall(r'\[\[([^]]*\.png)\]\]', content)

            # Step 3: Replace image links and ensure URLs are correctly formatted
            for image in images:
                # Prepare the Markdown-compatible link with %20 replacing spaces
                markdown_image = f"![Image Description](/images/{image.replace(' ', '%20')})"
                content = content.replace(f"[[{image}]]", markdown_image)

                # Step 4: Copy the image to the Hugo static/images directory if it exists
                image_source = os.path.join(attachments_dir, image)
                if os.path.exists(image_source):
                    shutil.copy(image_source, static_images_dir)

            # Step 5: Write the updated content back to the markdown file
            with open(filepath, "w") as file:
                file.write(content)

    print("Markdown files processed and images copied successfully.")

images(posts_dir, attachments_dir, static_images_dir)
images(projects_dir, attachments_dir, static_images_dir)
images(home_dir, attachments_dir, static_images_dir)
images(scripts_dir, attachments_dir, static_images_dir)
images(about_dir, attachments_dir, static_images_dir)

print("Completed the processing of all Markdown files across all folders and images copied successfully")
Create The Requisite Folders And Files For The GitHub Action To Sync The Code To S3⌗
- In the project folder run `mkdir .github`
- `cd` into the `.github` directory and run `mkdir workflows`
- `cd` into the `workflows` directory.
- Copy the code below.
name: Upload Website
on:
  push:
    branches:
      - master
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@master
      - uses: jakejarvis/s3-sync-action@master
        with:
          args: --acl public-read --follow-symlinks --delete
        env:
          AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_REGION: 'us-east-1'
          SOURCE_DIR: 'public'
- Run `nano deployToS3.yml`
- Paste the code into the file.
- Write the changes, save, and exit.
Keep The Code In A Git Repository⌗
Visit GitHub and create a new repo. I kept mine private.
I already have my SSH keys set up. This would be the time to set up SSH keys with GitHub if they are not already configured.
In the folder containing the Hugo project, run the following commands:
git remote add origin git@github.com:<GitHubUsername>/<HugoSiteRepoName>.git
git add .
git commit -m "<Insert Message Here>"
git push -u origin <Branch Name>
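If the project folder was not already initialized as a git repository (the earlier submodule step implies mine was), the sequence would start with `git init`. A minimal sketch, with a hypothetical username and repo name:

```shell
# Initialize the repo and point it at a (hypothetical) GitHub remote
git init
git remote add origin git@github.com:username/Hugo-Portfolio.git
git remote -v
```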
Create The CI/CD Pipeline From GitHub To The S3 Bucket⌗
Creating The GitHub Action⌗
In GitHub, create a GitHub Action by doing the following:
- Go to the repo containing the Hugo site.
- Navigate to the `deployToS3.yml` file.
- Click `Actions` in the menu bar.
- Click the edit button.
- Search the marketplace for ‘s3 sync’.
- Click on the ‘S3 Sync’ option by jakejarvis.
Creating The IAM User In AWS⌗
- In AWS go to the IAM console
- Create a new user. In my case I called the user “github-user”. I left “Provide user access to the AWS Management Console” unchecked.
- Choose “Attach policies directly”.
- Select the “AmazonS3FullAccess” permission and ensure all other permissions are unselected.
- Follow the prompts and click “Create User”.
- Go to the newly created user in IAM
- Click the “Security Credentials” tab
- Scroll down to the “Access keys” and click “Create access key”
- Click “Other”
- Click “Next”
- Enter in “github access” for the “Description tag value”
- Click “Create Access Key”
- There will be two values. The “Access key” value will be used for `AWS_ACCESS_KEY_ID` and the “Secret access key” for `AWS_SECRET_ACCESS_KEY`.
- Save both values to a secure location, as the secret access key will not be retrievable later.
Providing The Secrets⌗
- Click `Settings`
- Click `Secrets and variables`
- Click `New repository secret`
- Create a secret for each of `AWS_S3_BUCKET`, `AWS_ACCESS_KEY_ID`, and `AWS_SECRET_ACCESS_KEY`, providing the corresponding value in the text field while creating.
- Save and repeat the previous step for the next secret. For `AWS_S3_BUCKET`, provide the name of the bucket, which can be obtained from the S3 console in AWS.
Creating the S3 Bucket Policy⌗
- Go to the S3 console in AWS
- Click on the bucket being used for the website.
- Click the “Permissions” tab
- Click on “Bucket Policy”
- Click “Edit”
- Click on “Policy Generator”
- On “Select Type of Policy” select “S3 Bucket Policy”
- Ensure “Effect” is set to “Allow”
- In the “Principal” field enter the ARN of the user created for GitHub access.
- For “Actions” select “All Actions”. Note: for security, it is ideal to come back to this setting later and narrow the actions down to the bare minimum needed. I intend to update these security settings as I continue to work on the project.
- In the “Amazon Resource Name (ARN)” field enter the ARN of the S3 bucket.
- Click “Add Statement”
- Click “Generate Policy”
- Copy all of the text that comes up in the pop up window.
- Paste the text into the “Bucket Policy” of the S3 bucket.
- Click “Save Changes”
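For reference, the generated policy should look roughly like the sketch below (the account ID `123456789012` and bucket name `example-bucket` are placeholders). One caveat: object-level actions such as `s3:PutObject` apply to the `bucket-arn/*` resource, so including both the bucket ARN and `bucket-arn/*` avoids permission errors during the sync.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:user/github-user"
      },
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ]
    }
  ]
}
```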
The CI/CD pipeline between GitHub and the S3 bucket should now be complete. Next is to test it. Some tinkering with the S3 bucket policy and the GitHub Action may be necessary to get everything working. The next section goes over the areas to watch to ensure the pipeline is working correctly.
Testing The CI/CD Pipeline⌗
Updating The Site⌗
- Remove all files and folders from the `public` directory by `cd`-ing into it and running `rm -rf *`
- Confirm the deletion
- `cd` into the main project directory.
- Run the script to rsync the notes: `zsh rsync-notes.sh`
- Run the Python script to update the images: `python3 images.py`
- Create the static site with Hugo by running `hugo --gc`
- Add the files to git with `git add .`
- Commit the changes with `git commit -m "Insert message here"`
- Push the changes to the GitHub repo with `git push origin master`
- Wait for the GitHub Action to complete.
- Reload the site.
- If the site hasn’t changed, invalidate the CloudFront cache.