SCORM File Uploads with AWS S3

I had the opportunity to experiment with cloud-based file management by integrating Amazon Simple Storage Service (Amazon S3) to upload and store SCORM files. Through this process, I gained hands-on experience and uncovered valuable insights into leveraging S3’s powerful capabilities.

SCORM stands for Sharable Content Object Reference Model and is a set of standards for eLearning software that defines how content is shared across Learning Management System (LMS) platforms. A SCORM file is typically a zip package containing learning materials such as quizzes, videos, and text, all formatted to meet SCORM standards and including details about user interactions.
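Every SCORM package must include an imsmanifest.xml file at its root that describes the package contents. As a minimal illustration (this validation step is hypothetical and not part of the upload flow covered in this article), the rubyzip gem can check for it:

require "zip" # rubyzip gem

# Returns true if the zip package contains the required SCORM manifest at its root.
def scorm_package?(path)
  Zip::File.open(path) do |zip|
    !zip.find_entry("imsmanifest.xml").nil?
  end
end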

Amazon Simple Storage Service (Amazon S3) is an object storage service from Amazon Web Services (AWS), a leading cloud provider offering scalable, cost-effective, and secure services for storing and accessing data over the Internet.

In Amazon S3, files and their associated metadata are stored as objects in protected buckets (containers). Each object is assigned a key that serves as its unique identifier. Keys allow for hierarchical file organization within a flat storage system, similar to directories in a file system, and metadata enables detailed tracking, advanced querying, and filtering within buckets.
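To make keys and metadata concrete, here is a minimal sketch using the aws-sdk-s3 gem; the bucket name, key, and metadata values below are placeholders:

require "aws-sdk-s3"

client = Aws::S3::Client.new(region: "us-west-2")

# A "/" in the key creates a directory-like hierarchy within the flat bucket.
client.put_object(
  bucket: "scorm-files",                  # placeholder bucket
  key: "courses/2024/intro-course.zip",   # hierarchical key
  body: File.open("intro-course.zip", "rb"),
  metadata: { "uploaded-by" => "demo" }   # user-defined metadata stored with the object
)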

In this article, I’ll break down how SCORM file handling and integration with Amazon S3 were abstracted into two classes, Package and S3Api, resulting in a modular, scalable, and robust solution.

Controller Responsibilities

To start, from within my SCORM files controller, I create a new instance of the Package class, which abstracts the S3 interactions, keeping my controller lean and focused on handling web requests while offloading file storage logic to the S3 service:

upload = Package.new(file).upload
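For context, here is a simplified sketch of the surrounding controller action (the route, params handling, and response shape are illustrative, and it assumes upload returns the S3 object):

class ScormFilesController < ApplicationController
  # POST /scorm_files
  def create
    file = params.require(:file) # the uploaded SCORM zip from the request

    s3_object = Package.new(file).upload

    render json: { url: s3_object.public_url }, status: :created
  end
end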

Package Class

Package handles the process of uploading the SCORM file to a predefined S3 bucket.

class Package
  BUCKET = "scorm-files" # bucket names can consist only of lowercase letters, numbers, dots, and hyphens.
  ACL = "public-read"
  CONTENT = "application/zip"

  ...

  def upload
    temp_file = Tempfile.new("tempfile")
    temp_file.binmode

    begin
      temp_file.write(@file.read)
      temp_file.rewind

      object_key = "#{timestamp}_#{file_name}"
      s3_api = S3Api.new(region: ENV.fetch("AWS_REGION", "us-west-2"), bucket_name: BUCKET, acl: ACL, content_type: CONTENT)
      s3_api.upload_file(temp_file.path, object_key)
      ...
    rescue => exception
      ...
    ensure
      temp_file.close
      temp_file.unlink
    end
  end
end

Key responsibilities of the upload method include:

  1. Creating a temporary file for the upload.
  2. Ensuring the temp file is opened in binary mode (important for handling file uploads properly).
  3. Reading the content from the uploaded file and writing it to the temporary file.
  4. Constructing a unique key for S3. This ensures that each uploaded file has a unique identifier within the S3 bucket, preventing naming collisions and making it easier to locate specific files later. Optionally, you can also add tags to new and existing objects for data classification, access control, or analytics (see the tagging sketch after this list).
  5. Instantiating an object of the S3Api class and calling its upload_file method to handle the actual file upload to the specified S3 bucket.
  6. Ensuring the temp file is closed and deleted, preventing resource leaks.
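Since the timestamp and file_name helpers are elided above, here is one hypothetical way they could be defined, along with the optional tagging call mentioned in step 4 (put_object_tagging is part of the AWS SDK; the tag key and value are placeholders):

# Hypothetical private helpers for building the unique object key.
def timestamp
  Time.now.to_i # e.g., 1700000000
end

def file_name
  @file.original_filename # assumes a Rails-style uploaded file
end

# Optional: tag the uploaded object for classification, access control, or analytics.
client = Aws::S3::Client.new(region: ENV.fetch("AWS_REGION", "us-west-2"))
client.put_object_tagging(
  bucket: Package::BUCKET,
  key: object_key,
  tagging: { tag_set: [{ key: "category", value: "scorm" }] } # placeholder tag
)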

S3Api Class

S3Api abstracts the complexities of working with the Amazon S3 API, using the AWS SDK for Ruby to send file data to the specified S3 bucket.

class S3Api
  def initialize(region:, bucket_name:, acl:, content_type:)
    @client = Aws::S3::Client.new(region: region)
    @resource = Aws::S3::Resource.new(client: @client)
    @bucket_name = bucket_name
    @acl = acl
    @content_type = content_type
  end

  def upload_file(file, object_key)
    bucket = @resource.bucket(@bucket_name)
    s3_object = bucket.object(object_key)

    begin
      File.open(file, "rb") do |file_content|
        s3_object.put({
          body: file_content,
          acl: @acl,
          content_type: @content_type
        })
      end
      s3_object
    rescue
      ...
    end
  end
end

Key responsibilities of the S3Api class include:

  1. Initializing an S3 client and defining the geographical region for storage.
  2. Creating a resource object to interact with the specified bucket.
  3. Creating an S3 reference object for the file to be uploaded.
  4. Reading the file before uploading it with a specified ACL (access control list) and content type.
  5. Returning the S3 object, from which we can derive a public_url after a successful upload (shown below). This URL allows external applications like LMSs to access the uploaded file.
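For example, assuming Package#upload returns the S3 object from upload_file (its return value is elided above), the public URL can be derived like this:

s3_object = Package.new(file).upload
s3_object.public_url
# => "https://scorm-files.s3.us-west-2.amazonaws.com/1700000000_course.zip" (illustrative)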

This project allowed me to dive into new technologies like AWS while also enhancing my understanding of OOP. Here are some of my key takeaways:

  • Decoupling for Scalability and Maintainability: Separating management logic from the controller using modular classes like S3Api and Package significantly improves code scalability and maintainability. It makes a codebase easier to extend and adapt to future needs.
  • Instance Methods over Class Methods: Choosing instance methods over class methods can lead to cleaner, more flexible code. Instance methods allow for dependency injection and let classes maintain their internal state, which helps avoid the tight coupling that often comes with class methods (see the sketch after this list).
  • The Power of AWS S3: Amazon S3 stands out as a reliable and scalable storage solution with many great features, including versioning, which allows developers to maintain multiple versions of an object in the same bucket, and Intelligent-Tiering, which automatically shifts objects between storage tiers based on usage, minimizing storage costs.
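To illustrate the dependency-injection point, here is a small variation of S3Api’s initializer (a sketch, not the code shown above) that accepts an optional client so tests can supply a stub:

class S3Api
  def initialize(region:, bucket_name:, acl:, content_type:, client: nil)
    # Injecting a client keeps the class decoupled from a real network connection.
    @client = client || Aws::S3::Client.new(region: region)
    @resource = Aws::S3::Resource.new(client: @client)
    @bucket_name = bucket_name
    @acl = acl
    @content_type = content_type
  end
end

# In a test, the AWS SDK's built-in stubbing avoids real requests:
stub = Aws::S3::Client.new(stub_responses: true)
api = S3Api.new(region: "us-west-2", bucket_name: "scorm-files",
                acl: "public-read", content_type: "application/zip", client: stub)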

I hope these insights encourage you to experiment with AWS and explore the possibilities it offers. Thanks for reading!
