Compare commits


31 Commits
main ... main

Author SHA1 Message Date
hook-lord
f9530fd7d2 better error handling
All checks were successful
Build and Push Docker Image / build-and-push (push) Successful in 28s
2024-12-13 15:01:00 +01:00
hook-lord
fd73c4d5d9 updated excluded titles
All checks were successful
Build and Push Docker Image / build-and-push (push) Successful in 28s
2024-12-09 09:56:26 +01:00
hook-lord
bbb1c3f234 json path
All checks were successful
Build and Push Docker Image / build-and-push (push) Successful in 27s
2024-12-06 23:28:03 +01:00
hook-lord
8a5e665f77 updated cache path
All checks were successful
Build and Push Docker Image / build-and-push (push) Successful in 28s
2024-12-06 22:13:38 +01:00
hook-lord
7060e66713 hmmmm....
All checks were successful
Build and Push Docker Image / build-and-push (push) Successful in 14s
2024-12-06 19:42:16 +01:00
hook-lord
6b12914c1f image name
Some checks failed
Build and Push Docker Image / build-and-push (push) Has been cancelled
2024-12-06 19:41:25 +01:00
hook-lord
44b86ea7ec or detail oriented
Some checks failed
Build and Push Docker Image / build-and-push (push) Failing after 1m0s
2024-12-06 19:39:32 +01:00
hook-lord
f1efdb879c i am not a smart man
Some checks failed
Build and Push Docker Image / build-and-push (push) Failing after 7s
2024-12-06 19:38:43 +01:00
hook-lord
6420086124 syntax
Some checks failed
Build and Push Docker Image / build-and-push (push) Failing after 6s
2024-12-06 19:37:48 +01:00
hook-lord
fc17076d3f login with token
Some checks failed
Build and Push Docker Image / build-and-push (push) Failing after 1m8s
2024-12-06 19:35:58 +01:00
hook-lord
d6aab7675e registry
Some checks failed
Build and Push Docker Image / build-and-push (push) Failing after 6s
2024-12-06 19:31:27 +01:00
hook-lord
7c6955c2cc named build stage
Some checks failed
Build and Push Docker Image / build-and-push (push) Failing after 36s
2024-12-06 19:30:08 +01:00
hook-lord
777c9052ed more dockerifle
Some checks failed
Build and Push Docker Image / build-and-push (push) Failing after 6s
2024-12-06 18:05:03 +01:00
hook-lord
b299e19571 updated dockerfile
Some checks failed
Build and Push Docker Image / build-and-push (push) Failing after 7s
2024-12-06 18:02:05 +01:00
hook-lord
709464b5f7 added dockerfile and workflow
Some checks failed
Build and Push Docker Image / build-and-push (push) Failing after 33s
2024-12-06 18:00:11 +01:00
9b0941a04a Update readme.md 2024-07-30 13:12:34 +00:00
ec140cbbc7 Update readme.md 2024-07-30 13:12:13 +00:00
f7fcb41a87 Update readme.md 2024-07-30 13:10:54 +00:00
cd675c0d6a moved cache to details collector 2024-06-19 20:26:36 +02:00
333739450f typo 2024-06-11 12:44:02 +02:00
8c9f6e2dee finished implementation of itjobbank 2024-06-11 12:08:58 +02:00
32f83e358b added scraper for it-jobbank 2024-06-11 11:38:05 +02:00
979ed97738 added firstSeen to job struct 2024-06-11 08:49:46 +02:00
8abff30b52 added run script 2024-06-10 11:53:27 +02:00
fd9b4b515c added exclude keywords 2024-06-10 00:39:04 +02:00
1d25f4e112 updated description to grab raw html 2024-06-09 12:40:42 +02:00
07bb549d44 Merge pull request 'core/http-server' (#1) from core/http-server into main
Reviewed-on: rannes.dev/sw-jobs-go#1
2024-06-08 22:10:35 +00:00
c0ec6dc003 moved scraper logic out of main 2024-06-08 22:12:22 +02:00
994ee9c732 renamed to main 2024-06-08 21:51:45 +02:00
693d654764 added tailwind 2024-06-08 21:12:04 +02:00
38023c1aa5 fixed issue where it would write the jobs twice 2024-06-08 21:03:19 +02:00
9 changed files with 274 additions and 97 deletions

(new file: the "Build and Push Docker Image" CI workflow; path not captured)

@@ -0,0 +1,31 @@
name: Build and Push Docker Image
on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]
jobs:
  build-and-push:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - uses: actions/checkout@v4
      - name: Log in to Gitea Container Registry
        uses: docker/login-action@v3
        with:
          registry: ${{ secrets.REGISTRY }}
          username: ${{ secrets.USER }}
          password: ${{ secrets.TOKEN }}
      - name: Build and push Docker image
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ${{ secrets.REGISTRY }}/rannes.dev/sw-jobs-scraper:latest

.gitignore (5 changed lines)

@@ -1 +1,4 @@
/lambda-package
/thehub_cache
/thehub.json
/itjobbank_cache
/it-jobbank.json

Dockerfile (new file, 11 lines)

@@ -0,0 +1,11 @@
FROM golang:1.23 AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY *.go ./
RUN CGO_ENABLED=0 GOOS=linux go build -o job-scraper
FROM alpine:3.18
WORKDIR /app
COPY --from=builder /app/job-scraper .
CMD ["./job-scraper"]

(deleted file: the old AWS Lambda build-and-package script; path not captured)

@@ -1,31 +0,0 @@
#!/bin/bash
# Set variables
PACKAGE_DIR="./lambda-package"
BUILD_FILE="bootstrap"
ZIP_FILE="lambda-deployment.zip"
SOURCE_FILE="main.go"
# Delete the content of the lambda-package directory
rm -rf $PACKAGE_DIR/*
echo "Deleted the content of $PACKAGE_DIR"
# Set environment variables and build the Go project
GOOS=linux GOARCH=arm64 go build -o $BUILD_FILE -tags lambda.norpc $SOURCE_FILE
echo "Built the Go project with GOOS=linux and GOARCH=arm64"
# Move the build file to the lambda-package directory
mv $BUILD_FILE $PACKAGE_DIR/
echo "Moved the build file to $PACKAGE_DIR"
# Change directory to lambda-package
cd $PACKAGE_DIR
# Zip the contents of lambda-package into lambda-deployment.zip
zip -r $ZIP_FILE *
echo "Zipped the contents of $PACKAGE_DIR into $ZIP_FILE"
# Return to the original directory
cd -
echo "Script completed successfully"

go.mod (1 changed line)

@@ -8,7 +8,6 @@ require (
github.com/antchfx/htmlquery v1.3.1 // indirect
github.com/antchfx/xmlquery v1.4.0 // indirect
github.com/antchfx/xpath v1.3.0 // indirect
github.com/aws/aws-lambda-go v1.47.0 // indirect
github.com/gobwas/glob v0.2.3 // indirect
github.com/gocolly/colly v1.2.0 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect

go.sum (2 changed lines)

@@ -8,8 +8,6 @@ github.com/antchfx/xmlquery v1.4.0 h1:xg2HkfcRK2TeTbdb0m1jxCYnvsPaGY/oeZWTGqX/0h
github.com/antchfx/xmlquery v1.4.0/go.mod h1:Ax2aeaeDjfIw3CwXKDQ0GkwZ6QlxoChlIBP+mGnDFjI=
github.com/antchfx/xpath v1.3.0 h1:nTMlzGAK3IJ0bPpME2urTuFL76o4A96iYvoKFHRXJgc=
github.com/antchfx/xpath v1.3.0/go.mod h1:i54GszH55fYfBmoZXapTHN8T8tkcHfRgLyVwwqzXNcs=
github.com/aws/aws-lambda-go v1.47.0 h1:0H8s0vumYx/YKs4sE7YM0ktwL2eWse+kfopsRI1sXVI=
github.com/aws/aws-lambda-go v1.47.0/go.mod h1:dpMpZgvWx5vuQJfBt0zqBha60q7Dd7RfgJv23DymV8A=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/gobwas/glob v0.2.3 h1:A4xDbljILXROh+kObIiy5kIaPYD8e96x1tgBhUI5J+Y=
github.com/gobwas/glob v0.2.3/go.mod h1:d3Ez4x06l9bZtSvzIay5+Yzi0fmZzPgnTbPcKjJAkT8=

main.go (268 changed lines)

@@ -1,12 +1,13 @@
package main
import (
"context"
"encoding/json"
"fmt"
"log"
"os"
"strings"
"time"
"github.com/aws/aws-lambda-go/lambda"
"github.com/gocolly/colly"
)
@@ -19,6 +20,8 @@ type job struct {
Description string `json:"description"`
Link string `json:"link"`
Skills skills `json:"skills"`
Scraped string `json:"scraped"`
Source string `json:"source"`
}
type skills struct {
@@ -28,53 +31,85 @@ type skills struct {
Svelte bool `json:"svelte"`
Nextjs bool `json:"nextjs"`
Typescript bool `json:"typescript"`
Tailwind bool `json:"tailwind"`
}
var (
jobs []job
lastFetch time.Time
cacheTTL = time.Minute * 5
jobLimit = 20
)
// Utility functions
// Checks if a string contains any of the given keywords
func skillChecker(description string) skills {
return skills{
React: strings.Contains(description, "React"),
Python: strings.Contains(description, "Python"),
Golang: strings.Contains(description, "Go"),
Golang: strings.Contains(description, "Golang"),
Svelte: strings.Contains(description, "Svelte"),
Nextjs: strings.Contains(description, "Next.js"),
Typescript: strings.Contains(description, "TypeScript"),
Tailwind: strings.Contains(description, "Tailwind"),
}
}
func fetchData() error {
// Converts job struct to json
func jobsToJson(file *os.File, jobs []job, fName string) {
// Encode jobs slice to JSON
encoder := json.NewEncoder(file)
encoder.SetIndent("", " ") // Pretty-print with indentation
if err := encoder.Encode(jobs); err != nil {
log.Fatalf("Cannot write to file %q: %s", fName, err)
}
baseUrl := "https://thehub.io"
// Instantiate default collector
c := colly.NewCollector(
// visit only the hub
colly.AllowedDomains("www.thehub.io", "thehub.io"),
fmt.Println("Job details successfully written to", fName)
}
// Cache responses to prevent multiple requests
colly.CacheDir("./tmp"),
)
// Slice of excluded words in the job titles
excluded := []string{"senior", "lead"}
// Instantiate a new collector to visit the job details page
detailsCollector := c.Clone()
// Limit the number of jobs to fetch
jobCount := 0
// On every <div> element with class "card__content attribute call callback
c.OnHTML("div[class=card__content]", func(e *colly.HTMLElement) {
// Return if the job limit has been reached
if jobCount >= jobLimit {
func checkIfPaid(description string) {
for _, keyword := range unpaidKeywords {
if strings.Contains(strings.ToLower(description), keyword) {
return
}
// Get the title and ensure it doesn't contain any excluded words
}
}
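// NOTE: as shown above, checkIfPaid has no return value and no side effects,
// so the unpaid-keyword filter cannot take effect where it is called in
// scrapeItJobBank below. A corrected form (illustrative sketch only, not in
// the repository) would report a result the caller can act on:
//
//	func isUnpaid(description string) bool {
//		lower := strings.ToLower(description)
//		for _, keyword := range unpaidKeywords {
//			if strings.Contains(lower, keyword) {
//				return true
//			}
//		}
//		return false
//	}
//
// The details callback could then skip a posting when isUnpaid returns true.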
func checkIfStudent(description string) string {
for _, keyword := range studentKeywords {
if strings.Contains(strings.ToLower(description), keyword) {
return "student"
}
}
return "full time"
}
// Slice to store job details
var (
excluded = []string{"senior", "lead", "founder", "cto", "vp of", "erfaren", "arkitekt", "architect", "manager", "ulønnet", "unpaid", "praktik", "cyber", "leder", "sikkerhed", "supporter", "sr."}
unpaidKeywords = []string{"unpaid", "praktik", "ulønnet"}
studentKeywords = []string{"studerende", "studenter", "student", "medhjælper"}
)
func scrapeHub() {
var (
jobs []job
jobCount int
fName = "/app/data/thehub.json"
maxJobs = 20
baseUrl = "https://thehub.io"
searchString = "https://thehub.io/jobs?roles=frontenddeveloper&roles=fullstackdeveloper&roles=backenddeveloper&roles=devops&paid=true&countryCode=DK&sorting=newJobs"
)
// Create file after scraping is complete
c := colly.NewCollector(
colly.AllowedDomains("www.thehub.io", "thehub.io"),
)
detailsCollector := colly.NewCollector(
colly.AllowedDomains("www.thehub.io", "thehub.io"),
colly.CacheDir("/app/data/thehub_cache"),
)
c.OnHTML("div[class=card__content]", func(e *colly.HTMLElement) {
if jobCount >= maxJobs {
return
}
title := e.ChildText("span.card-job-find-list__position")
for _, excludedWord := range excluded {
if strings.Contains(strings.ToLower(title), excludedWord) {
@@ -91,66 +126,189 @@ func fetchData() error {
fmt.Println("Visiting", r.URL.String())
})
detailsCollector.OnHTML("div.view-job-details", func(e *colly.HTMLElement) {
if jobCount >= jobLimit {
detailsCollector.OnHTML("div[class='view-job-details']", func(e *colly.HTMLElement) {
if jobCount >= maxJobs {
return
}
// Get logo and trim the url
logo := e.ChildAttr("div.media-item__image", "style")
cutLeft := "background-image:url("
cutRight := ");"
trimmedLogo := strings.Trim(logo, cutLeft+cutRight)
// Get company name
descriptionHTML, err := e.DOM.Find("content.text-block__content > span").Html()
if err != nil {
log.Printf("Error getting HTML of description: %s", err)
return
}
jobDetails := job{
Title: e.ChildText("h2[class=view-job-details__title]"),
Logo: trimmedLogo,
Company: e.ChildText(".bullet-inline-list > a:first-child"),
Location: e.ChildText(".bullet-inline-list > a:nth-child(2)"),
Type: e.ChildText(".bullet-inline-list > a:nth-child(3)"),
Description: e.ChildText("content.text-block__content > span"),
Description: descriptionHTML,
Link: e.Request.URL.String(),
Skills: skillChecker(e.ChildText("content.text-block__content > span")),
Scraped: time.Now().String(),
Source: baseUrl,
}
jobs = append(jobs, jobDetails)
jobCount++
fmt.Printf("Scraped job %d from TheHub\n", jobCount)
})
// Handle pagination
c.OnHTML("a.page-link", func(e *colly.HTMLElement) {
if jobCount >= maxJobs {
return
}
nextPage := e.Attr("href")
if nextPage != "" {
fullNextPage := baseUrl + nextPage
fmt.Println("Visiting next page:", fullNextPage)
e.Request.Visit(fullNextPage)
}
})
// Visit the initial URL to start scraping
err := c.Visit("https://thehub.io/jobs?roles=frontenddeveloper&roles=fullstackdeveloper&roles=backenddeveloper&search=developer&paid=true&countryCode=DK&sorting=newJobs")
// Add error handling for the initial visit
err := c.Visit(searchString)
if err != nil {
return err
log.Printf("Error visiting TheHub: %s", err)
return
}
// Wait for all collectors to finish
c.Wait()
detailsCollector.Wait()
// Write jobs to file after scraping is complete
if len(jobs) > 0 {
file, err := os.Create(fName)
if err != nil {
log.Printf("Cannot create file %q: %s", fName, err)
return
}
defer file.Close()
jobsToJson(file, jobs, fName)
fmt.Printf("Successfully scraped %d jobs from TheHub\n", len(jobs))
} else {
log.Println("No jobs were scraped from TheHub")
}
return nil
}
func handler(ctx context.Context) ([]job, error) {
// Check if cache is valid
if time.Since(lastFetch) < cacheTTL && len(jobs) > 0 {
return jobs, nil
}
func scrapeItJobBank() {
var (
jobs []job
jobCount int
fName = "/app/data/it-jobbank.json"
maxJobs = 20
baseUrl = "https://www.it-jobbank.dk"
searchString = "https://www.it-jobbank.dk/jobsoegning/udvikling"
)
// Fetch new data
err := fetchData()
c := colly.NewCollector(
colly.AllowedDomains("www.it-jobbank.dk", "it-jobbank.dk"),
)
detailsCollector := colly.NewCollector(
colly.AllowedDomains("www.it-jobbank.dk", "it-jobbank.dk"),
colly.CacheDir("/app/data/itjobbank_cache"),
)
c.OnHTML("div[class=result]", func(e *colly.HTMLElement) {
if jobCount >= maxJobs {
return
}
title := e.ChildText("h3.job-title > a")
for _, excludedWord := range excluded {
if strings.Contains(strings.ToLower(title), excludedWord) {
return
}
}
fullLink := e.ChildAttr("h3.job-title > a", "href")
detailsCollector.Visit(fullLink)
})
detailsCollector.OnRequest(func(r *colly.Request) {
fmt.Println("Visiting", r.URL.String())
})
detailsCollector.OnHTML("section > div", func(e *colly.HTMLElement) {
if jobCount >= maxJobs {
return
}
descriptionHTML, err := e.DOM.Find("div[id=job_ad]").Html()
if err != nil {
log.Printf("Error getting HTML of description: %s", err)
return
}
checkIfPaid(descriptionHTML)
title := e.ChildText("h1.title")
if title == "" {
title = e.ChildText("h1[id=jobtitle]")
}
jobDetails := job{
Title: title,
Logo: baseUrl + e.ChildAttr("div.company-logo > img", "src"),
Company: e.ChildText("p.published"),
Location: e.ChildText("div.job-location > p.caption"),
Type: checkIfStudent(descriptionHTML),
Description: descriptionHTML,
Link: e.Request.URL.String(),
Skills: skillChecker(descriptionHTML),
Scraped: time.Now().String(),
Source: baseUrl,
}
jobs = append(jobs, jobDetails)
jobCount++
fmt.Printf("Scraped job %d from IT JobBank\n", jobCount)
})
c.OnHTML("a.page-link", func(e *colly.HTMLElement) {
if jobCount >= maxJobs {
return
}
nextPage := e.Attr("href")
if nextPage != "" {
e.Request.Visit(nextPage)
}
})
// Add error handling for the initial visit
err := c.Visit(searchString)
if err != nil {
return nil, err
log.Printf("Error visiting IT JobBank: %s", err)
return
}
// Update cache timestamp
lastFetch = time.Now()
// Wait for all collectors to finish
c.Wait()
detailsCollector.Wait()
return jobs, nil
// Write jobs to file after scraping is complete
if len(jobs) > 0 {
file, err := os.Create(fName)
if err != nil {
log.Printf("Cannot create file %q: %s", fName, err)
return
}
defer file.Close()
jobsToJson(file, jobs, fName)
fmt.Printf("Successfully scraped %d jobs from IT JobBank\n", len(jobs))
} else {
log.Println("No jobs were scraped from IT JobBank")
}
}
func main() {
lambda.Start(handler)
scrapeHub()
scrapeItJobBank()
}
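
The rewritten main.go drops the Lambda handler and instead has each scraper write a JSON array of `job` structs under /app/data. As an illustration of the schema a downstream consumer would read back, here is a minimal, hypothetical sketch (not part of the repository) that loads one of the output files into the same structs; JSON tags for fields outside the visible hunks are assumed to follow the lowercase pattern of the tags that are shown.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os"
)

// skills and job mirror the structs in main.go above.
type skills struct {
	React      bool `json:"react"`
	Python     bool `json:"python"`
	Golang     bool `json:"golang"`
	Svelte     bool `json:"svelte"`
	Nextjs     bool `json:"nextjs"`
	Typescript bool `json:"typescript"`
	Tailwind   bool `json:"tailwind"`
}

type job struct {
	Title       string `json:"title"`
	Logo        string `json:"logo"`
	Company     string `json:"company"`
	Location    string `json:"location"`
	Type        string `json:"type"`
	Description string `json:"description"`
	Link        string `json:"link"`
	Skills      skills `json:"skills"`
	Scraped     string `json:"scraped"`
	Source      string `json:"source"`
}

func main() {
	// Path matches the fName used by scrapeHub; the it-jobbank output works the same way.
	data, err := os.ReadFile("/app/data/thehub.json")
	if err != nil {
		log.Fatal(err)
	}
	var jobs []job
	if err := json.Unmarshal(data, &jobs); err != nil {
		log.Fatal(err)
	}
	for _, j := range jobs {
		fmt.Printf("%s | %s (%s) go=%v\n", j.Title, j.Company, j.Type, j.Skills.Golang)
	}
}
```

Run against the files the container produces, this prints one line per scraped job.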

readme.md

@@ -1,13 +1,18 @@
# The Hub Scraper
# IT jobs scraper
deprecated as lambda was a bad solution for this, without setting up dynamodb, api etc. This will go live in a ec2 so it can write to local storage instead of running on demand.
This is a simple scraper that extracts job details from the [The Hub](https://thehub.io) website and itjobbank.
Go is fast but free tier lambda is not and I am not yet a smart man.
~~This is a simple scraper that extracts job details from the [The Hub](https://thehub.io) website.~~
## Filtering
~~It's a fork of the original [The Hub Scraper](https://gitea.rannes.dev/rannes.dev/sw-jobs-go) by [Rannes](https://gitea.rannes.dev/rannes.dev).~~
The scraper filters out a list of keywords like senior, architect etc. as I wrote it for entry and mid level roles. It also filters out unpaid from the hub, and keyword based from itjobbank.
~~## Usage~~
## Usage
~~To run the scraper zip it deploy it to AWS Lambda and then call the function.~~
To run the scraper, simply execute the following command:
```bash
go run scraper.go
```
The scraper will create a `thehub.json` and `itjobbank.json` file in the current directory, which contains a list of job details in JSON format. It caches the pages, so very light on resources and requests.
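
The Filtering section above says titles containing keywords such as "senior" or "architect" are skipped. A small, hypothetical test sketch of that check follows; the `isExcluded` helper and the test are illustrative only, since in main.go the loop lives inline in the OnHTML callbacks and uses the full `excluded` slice.

```go
package main

import (
	"strings"
	"testing"
)

// isExcluded reproduces the inline title check from the scrapers' OnHTML
// callbacks: a title is skipped if it contains any excluded keyword,
// case-insensitively. The helper name is illustrative.
func isExcluded(title string, excluded []string) bool {
	lower := strings.ToLower(title)
	for _, word := range excluded {
		if strings.Contains(lower, word) {
			return true
		}
	}
	return false
}

func TestTitleFiltering(t *testing.T) {
	// Subset of the excluded slice defined in main.go.
	excluded := []string{"senior", "lead", "architect", "praktik"}
	cases := map[string]bool{
		"Senior Backend Engineer": true,
		"Tech Lead":               true,
		"Junior Go Developer":     false,
	}
	for title, want := range cases {
		if got := isExcluded(title, excluded); got != want {
			t.Errorf("isExcluded(%q) = %v, want %v", title, got, want)
		}
	}
}
```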

run-scrapers.sh (new file, 3 lines)

@ -0,0 +1,3 @@
#!/bin/bash
cd /home/admin/sw-jobs-go
go run go-scraper >> scraper.log 2>&1