For my use case, I needed an object storage solution similar to S3 or Google Cloud Storage. I found an open-source alternative, MinIO, which fit my needs perfectly.
I tested MinIO using their publicly hosted version. You can access it from here.
For credentials and login details, refer here.
Next, I integrated MinIO with my Go application using the MinIO Go SDK. Since I wanted to upload multiple files simultaneously, I used goroutines and channels for concurrent uploads.
The full source code is here, but I'll explain a few key snippets.
Go initial setup
Create a simple Go project. (If you haven't installed Go yet, refer here.)
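If you are starting from scratch, the setup boils down to a couple of commands. The module path below is just a placeholder; use your own.

```bash
# Hypothetical module path -- replace with your own.
go mod init github.com/yourname/minio-upload-demo

# Gin for the HTTP layer, minio-go for object storage.
go get github.com/gin-gonic/gin
go get github.com/minio/minio-go/v7
```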
To begin, I wrote a simple API using the Gin framework to handle file uploads:
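A minimal sketch of such a handler might look like the following. The /upload route, the "files" form field, and the UploadFiles service function (sketched further down) are placeholders of mine, not necessarily what the linked repository uses.

```go
// main.go -- a minimal sketch of the upload endpoint.
package main

import (
	"net/http"

	"github.com/gin-gonic/gin"
)

func main() {
	router := gin.Default()

	// POST /upload accepts a multipart form with one or more files
	// under the "files" field and hands them to the service layer.
	router.POST("/upload", func(c *gin.Context) {
		form, err := c.MultipartForm()
		if err != nil {
			c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
			return
		}

		// UploadFiles is the service-layer function sketched below.
		statuses := UploadFiles(c.Request.Context(), form.File["files"])
		c.JSON(http.StatusOK, gin.H{"results": statuses})
	})

	router.Run(":8080")
}
```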
This API accepts a POST request with files (e.g., images) in the payload.
The actual file handling occurs in the service layer, which (see the sketch after this list):
Connects to MinIO
Retrieves the files from the multipart request
Iterates through each file
Fires off a goroutine to upload each file, using a channel to track the status
Listens to the channel for file upload statuses
Returns the status and closes the request
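Put together, a stripped-down version of that service function could look like this. The endpoint, credentials, bucket name, and the UploadStatus type are all placeholders; the real values live in the linked repository.

```go
// service.go -- a sketch of the service layer described above.
package main

import (
	"context"
	"mime/multipart"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

// UploadStatus is what each goroutine reports back over the channel.
type UploadStatus struct {
	FileName string `json:"fileName"`
	Success  bool   `json:"success"`
	Error    string `json:"error,omitempty"`
}

const bucketName = "demo-bucket" // placeholder bucket

// UploadFiles connects to MinIO, fires one goroutine per file, and then
// drains the channel until every upload has reported a status.
func UploadFiles(ctx context.Context, files []*multipart.FileHeader) []UploadStatus {
	// 1. Connect to MinIO (placeholder endpoint and keys).
	client, err := minio.New("play.min.io", &minio.Options{
		Creds:  credentials.NewStaticV4("YOUR-ACCESS-KEY", "YOUR-SECRET-KEY", ""),
		Secure: true,
	})
	if err != nil {
		return []UploadStatus{{Error: err.Error()}}
	}

	// 2. One channel shared by all goroutines; buffered so no sender blocks.
	statusCh := make(chan UploadStatus, len(files))

	// 3. Iterate through the files and fire off a goroutine per upload.
	for _, fh := range files {
		go PutImageInBucket(ctx, client, fh, statusCh)
	}

	// 4. Listen on the channel until every goroutine has reported back.
	statuses := make([]UploadStatus, 0, len(files))
	for range files {
		statuses = append(statuses, <-statusCh)
	}
	return statuses
}
```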
Each goroutine runs PutImageInBucket(), which wraps the MinIO SDK's PutObject function and uses a channel to return the status:
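Continuing the sketch above, a bare-bones version of that wrapper might look like this; it is my approximation of the real function, not a copy of it.

```go
// PutImageInBucket is a thin wrapper around the SDK's PutObject call;
// it sends the outcome back over the channel instead of returning it.
func PutImageInBucket(ctx context.Context, client *minio.Client, fh *multipart.FileHeader, statusCh chan<- UploadStatus) {
	file, err := fh.Open()
	if err != nil {
		statusCh <- UploadStatus{FileName: fh.Filename, Error: err.Error()}
		return
	}
	defer file.Close()

	// PutObject streams the file into the bucket under its original name.
	_, err = client.PutObject(ctx, bucketName, fh.Filename, file, fh.Size,
		minio.PutObjectOptions{ContentType: fh.Header.Get("Content-Type")})
	if err != nil {
		statusCh <- UploadStatus{FileName: fh.Filename, Error: err.Error()}
		return
	}

	statusCh <- UploadStatus{FileName: fh.Filename, Success: true}
}
```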
After uploading, I generate pre-signed URLs to allow clients to access the uploaded files. Pre-signed URLs require you to specify the validity period (from 1 second to 7 days).
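As a rough illustration, a helper built on the SDK's PresignedGetObject could look like this; the 24-hour expiry and the function name are my own choices.

```go
// GetPresignedURL returns a time-limited download link for an object.
// Requires "net/url" and "time" in the imports of the sketch above.
func GetPresignedURL(ctx context.Context, client *minio.Client, objectName string) (string, error) {
	// Validity can be anywhere from 1 second up to 7 days; 24 hours here.
	presignedURL, err := client.PresignedGetObject(ctx, bucketName, objectName, 24*time.Hour, url.Values{})
	if err != nil {
		return "", err
	}
	return presignedURL.String(), nil
}
```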
But Why??
By using goroutines, the application can upload multiple files concurrently. Channels help gather the upload statuses from each goroutine without encountering race conditions.