Why A Portfolio Site?

A portfolio site is a great way to showcase IT skills and distinguish oneself while pursuing new opportunities in a competitive job market. Additionally, the site can potentially help build new connections with peers.

The Goal

  • Import markdown files from Obsidian into Hugo.
    • Automate this process with a script.
  • Build the portfolio site with Hugo and a provided theme.
  • Keep the code in a git repository for version control.
  • Create a GitHub Action so that when the Hugo site is updated and pushed to GitHub, a workflow kicks off to sync the necessary files to S3.
  • The files in the S3 bucket will be served as a static website.

Building It All Out

Note on the S3 Bucket and CloudFront

In a previous project predating the posts on my static website, I already set up a public S3 bucket with CloudFront and the necessary Route53 settings. I intend to update this post in the future with the steps necessary to set up the public S3 bucket, CloudFront, and Route53.

Install Hugo

brew install hugo
  • Verify Hugo is installed with the following command:
hugo version

Creating A Folder For The Portfolio Site

Open the CLI and create a folder to store the portfolio site. In my case, I chose “Hugo-Portfolio”.

mkdir Hugo-Portfolio

Create The New Hugo Site

In the folder created to store the site, run the following command.

hugo new site Hugo-Portfolio 
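This scaffolds the skeleton of the site. In recent Hugo versions the generated layout looks roughly like the following (the exact contents vary by version):

Hugo-Portfolio/
├── archetypes/
├── assets/
├── content/
├── data/
├── i18n/
├── layouts/
├── static/
├── themes/
└── hugo.toml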

Build The Portfolio Site With Hugo

Install The re-Terminal Theme For Hugo

  • Install the theme as a git submodule with the following command:
git submodule add -f https://github.com/mirus-ua/hugo-theme-re-terminal.git themes/re-terminal
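One thing worth knowing about submodules: the theme’s code is not stored in the repo itself, so anyone cloning the repo later needs to pull the submodule as well. For example:

git clone --recurse-submodules git@github.com:<GitHubUsername>/<HugoSiteRepoName>.git

Or, in an existing clone:

git submodule update --init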

Copy Contents To Hugo.toml File

From the theme’s site page https://themes.gohugo.io/themes/hugo-theme-re-terminal/#how-to-configure copy the configuration, then open the hugo.toml file with a text editor of choice. Delete all of the text inside, paste in the copied configuration, and save the changes.

Create The Folders for The Site Pages In Hugo

  1. cd into the content folder.
  2. Create folders for About, Home, Projects, and Scripts using the mkdir command (see the commands below).
  3. cd back into the project directory.
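As a sketch, assuming the project root is the current directory, those three steps look like this (the lowercase folder names are an assumption, chosen to match the lowercase URLs in the menu configuration below):

cd content
mkdir about home projects scripts
cd ..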

Configure The Site Pages

Use a text editor of choice to edit the hugo.toml file in the site’s directory. At the bottom of the file is a block that looks like the following:

      [languages.en.menu]
        [[languages.en.menu.main]]
          identifier = "about"
          name = "About"
          url = "/about"

Add in the sections needed. I did the following:

      [languages.en.menu]
        [[languages.en.menu.main]]
          identifier = "about"
          name = "About"
          url = "/about"
        [[languages.en.menu.main]]
          identifier = "projects"
          name = "Projects"
          url = "/projects"
        [[languages.en.menu.main]]
          identifier = "posts"
          name = "Posts"
          url = "/posts"
        [[languages.en.menu.main]]
          identifier = "scripts"
          name = "Scripts"
          url = "/scripts"

In the block above the section for the site’s pages, I changed the settings to the following:

[languages]
  [languages.en]
    languageName = "English"
    title = "Carl Kernek"

Near the top of the file, I changed the following:

  • showMenuItems to “4”
  • contentTypeName to “home”
  • themeColor to “blue”
  • paginate to “4”

Write the changes to the file, save, and exit.
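For reference, those changes end up in hugo.toml looking roughly like the block below. This is a sketch based on the re-terminal theme’s example config (most of these settings live under [params], while paginate is a site-level Hugo option), so the exact placement may differ:

paginate = 4

[params]
  contentTypeName = "home"
  themeColor = "blue"
  showMenuItems = 4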

Create Rsync Script

I created an rsync script to make it easier to copy the Markdown files from Obsidian to Hugo.

Note: I did not include my exact file paths for security reasons.

#!/bin/zsh
echo "About to perform the rsync"
rsync -av --delete "/Source/Folder1" "Destination/Folder"
rsync -av --delete "/Source/Folder2" "Destination/Folder"
rsync -av --delete "/Source/Folder3" "Destination/Folder"
rsync -av --delete "/Source/Folder4" "Destination/Folder"
rsync -av --delete "/Source/Folder5" "Destination/Folder"
echo "rsync complete"
exit 0

I saved the file in the Hugo-Portfolio folder as rsync-notes.sh.
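One rsync subtlety to keep in mind when filling in real paths: a source path without a trailing slash copies the folder itself into the destination, while a source path with a trailing slash copies only the folder’s contents. For example:

rsync -av --delete "/Source/Folder1" "/Destination/Folder"
# creates /Destination/Folder/Folder1

rsync -av --delete "/Source/Folder1/" "/Destination/Folder"
# copies the contents of Folder1 directly into /Destination/Folder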

Make Changes to Network Chuck’s Images.py Script

I copied Network Chuck’s Python script from https://blog.networkchuck.com/posts/my-insane-blog-pipeline/#maclinux-1

I changed the script to the following. The main change is wrapping the logic in a function so it can be reused for each folder I want content in on the portfolio site.

import os
import re
import shutil

# Paths
posts_dir = 'path to posts dir in Hugo folder'
projects_dir = 'path to projects dir in Hugo folder'
home_dir = 'path to home dir in Hugo folder'
scripts_dir = 'path to scripts dir in Hugo folder'
about_dir = 'path to about dir in Hugo folder'
attachments_dir = 'path to attachments dir in Obsidian'
static_images_dir = 'path to static images dir in Hugo folder'

def images(content_dir, attachments_dir, static_images_dir):
    # Step 1: Process each markdown file in the given content directory
    for filename in os.listdir(content_dir):
        if filename.endswith(".md"):
            filepath = os.path.join(content_dir, filename)

            with open(filepath, "r") as file:
                content = file.read()

            # Step 2: Find all Obsidian-style image links in the format [[image name.png]]
            image_links = re.findall(r'\[\[([^]]*\.png)\]\]', content)

            # Step 3: Replace image links and ensure URLs are correctly formatted
            for image in image_links:
                # Prepare the Markdown-compatible link with %20 replacing spaces
                markdown_image = f"![Image Description](/images/{image.replace(' ', '%20')})"
                content = content.replace(f"[[{image}]]", markdown_image)

                # Step 4: Copy the image to the Hugo static/images directory if it exists
                image_source = os.path.join(attachments_dir, image)
                if os.path.exists(image_source):
                    shutil.copy(image_source, static_images_dir)

            # Step 5: Write the updated content back to the markdown file
            with open(filepath, "w") as file:
                file.write(content)

    print("Markdown files processed and images copied successfully.")

images(posts_dir, attachments_dir, static_images_dir)
images(projects_dir, attachments_dir, static_images_dir)
images(home_dir, attachments_dir, static_images_dir)
images(scripts_dir, attachments_dir, static_images_dir)
images(about_dir, attachments_dir, static_images_dir)
print("Completed the processing of all Markdown files across all folders and images copied successfully.")

Create The Requisite Folders And Files For The GitHub Action To Sync The Code To S3

  1. In the project folder run mkdir .github
  2. cd into the .github directory and run mkdir workflows
  3. cd into the workflows directory.
  4. Copy the below code:
name: Upload Website

on:
  push:
    branches:
    - master
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@master
    - uses: jakejarvis/s3-sync-action@master
      with:
        args: --acl public-read --follow-symlinks --delete
      env:
        AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}
        AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
        AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        AWS_REGION: 'us-east-1'
        SOURCE_DIR: 'public'
  5. Run nano deployToS3.yml
  6. Paste the code into the file.
  7. Write the changes, save, and exit the file.
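One thing to verify before pushing: the workflow triggers on pushes to master. If the repository’s default branch is main instead, the trigger needs to match, for example:

on:
  push:
    branches:
    - main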

Keep The Code In A Git Repository

Visit GitHub and create a new repo. I kept mine private.

I already have my SSH keys set up; this would be the time to set up SSH keys with GitHub if they are not already configured.

In the folder containing the Hugo project, run the following commands (if the folder is not already a git repository, run git init first):

git remote add origin git@github.com:<GitHubUsername>/<HugoSiteRepoName>.git
git add .
git commit -m "<Insert Message Here>"
git push -u origin <Branch Name>
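Note that the branch pushed to must match the branch the GitHub Action watches. In this walkthrough that branch is master, so the final command is:

git push -u origin master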

Create The CI/CD Pipeline From GitHub To The S3 Bucket

Creating The GitHub Action

In GitHub create a GitHub Action by doing the following:

  1. Go to the repo containing the Hugo site.
  2. Navigate to the deployToS3.yml file.
  3. Click Actions in the menu bar.
  4. Click the edit button.
  5. Search the marketplace for ‘s3 sync’.
  6. Click on the ‘S3 Sync’ option by jakejarvis.

Creating The IAM User In AWS

  1. In AWS go to the IAM console
  2. Create a new user. In my case I called the user “github-user”. I left “Provide user access to the AWS Management Console” unchecked.
  3. Choose “Attach policies directly”.
  4. Select the “AmazonS3FullAccess” policy and ensure all other permissions are unselected.
  5. Follow the prompts and “Create User”
  6. Go to the newly created user in IAM
  7. Click the “Security Credentials” tab
  8. Scroll down to the “Access keys” and click “Create access key”
  9. Click “Other”
  10. Click “Next”
  11. Enter in “github access” for the “Description tag value”
  12. Click “Create Access Key”
  13. There will be two values. The value for the “Access key” will be used for the AWS_ACCESS_KEY_ID and the “Secret access key” will be for the AWS_SECRET_ACCESS_KEY.
  14. Save both values to a secure location as the secret access key will not be retrievable later.

Providing The Secrets

  1. In the GitHub repo, click Settings.
  2. Click “Secrets and variables”, then “Actions”.
  3. Click “New repository secret”.
  4. Create a secret for AWS_S3_BUCKET and provide the name of the bucket, which can be obtained from the S3 console in AWS.
  5. Repeat steps 3 and 4 for AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, providing the corresponding values saved earlier.

Creating The S3 Bucket Policy

  1. Go to the S3 console in AWS
  2. Click on the bucket being used for the website.
  3. Click the “Permissions” tab
  4. Click on “Bucket Policy”
  5. Click “Edit”
  6. Click on “Policy Generator”
  7. On “Select Type of Policy” select “S3 Bucket Policy”
  8. Ensure “Effect” is set to “Allow”
  9. In the “Principal” field enter the ARN of the user created for GitHub access.
  10. For “Actions” select “All Actions”. Note: for security, it is ideal to come back to this setting later and narrow the allowed actions down to the bare minimum needed. It is my intention to update these security settings as I continue to work on the project.
  11. In the “Amazon Resource Name (ARN)” field enter the ARN of the S3 bucket. For object-level actions the ARN generally needs a /* suffix (e.g. arn:aws:s3:::bucket-name/*).
  12. Click “Add Statement”
  13. Click “Generate Policy”
  14. Copy all of the text that comes up in the pop up window.
  15. Paste the text into the “Bucket Policy” of the S3 bucket.
  16. Click “Save Changes”
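For illustration, the generated policy should look roughly like the following. The account ID and bucket name here are hypothetical placeholders:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:user/github-user"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}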

The CI/CD pipeline between GitHub and the S3 bucket should now be complete; the next step is to test it. Some tinkering with the S3 bucket policy and the GitHub Action may be necessary to get everything working. The next section goes over the areas to watch to ensure the pipeline is working correctly.

Testing The CI/CD Pipeline

Updating The Site

  1. Remove all files and folders from the public folder by cd-ing into the directory and running rm -rf *
  2. Confirm the deletion.
  3. cd into the main project directory.
  4. Run the script to rsync the notes: zsh rsync-notes.sh
  5. Run the Python script to update the images: python3 images.py
  6. Create the static site with Hugo by running hugo --gc
  7. Add the files to git with git add .
  8. Commit the changes with git commit -m "Insert message here"
  9. Push the changes to the GitHub repo with git push origin master
  10. Wait for the GitHub Action to complete.
  11. Reload the site.
  12. If the site hasn’t changed, invalidate the CloudFront cache (see the example command below).
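An invalidation can be created from the CloudFront console or with the AWS CLI. For example, with a placeholder distribution ID:

aws cloudfront create-invalidation --distribution-id <DistributionID> --paths "/*"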