diff --git a/README.md b/README.md
index de3d6e694..e30f5be33 100644
--- a/README.md
+++ b/README.md
@@ -66,7 +66,7 @@ The goal is to have a single service that you can run and it works out of the bo
### Prerequisites
-* **A computer that's on the public internet to run it on.** While crunching through video and serving it to viewers can be intensive from the computing side, you can get away with pretty meager resources. If you don't already have a server to run it on you can get a [Linode](https://www.linode.com/products/nanodes/) instance for $5/mo that runs it fine. If you worry that you'll be maxing out the bandwidth or transfer limits allotted to you, then utilize [Amazon S3](https://aws.amazon.com/s3/) very cheaply (or even free for a certain amount) to serve the files instead.
+* **A computer that's on the public internet to run it on.** While crunching through video and serving it to viewers can be intensive from the computing side, you can get away with pretty meager resources. If you don't already have a server to run it on you can get a [Linode](https://www.linode.com/products/nanodes/) instance for $5/mo that runs it fine. If you worry that you'll be maxing out the bandwidth or transfer limits allotted to you, then utilize [S3 Storage](https://github.com/gabek/owncast/blob/master/doc/S3.md) very cheaply (or even free for a certain amount) to serve the files instead.
* [ffmpeg](https://ffmpeg.org/) is required to function. [Install it](https://ffmpeg.org/download.html) first.
* These instructions are assuming you're using [OBS](https://obsproject.com/) on your personal computer to stream from. It's not required, but it's a great free piece of software.
@@ -99,16 +99,18 @@ The goal is to have a single service that you can run and it works out of the bo
Three ways of storing and distributing the video are supported.
1. [Locally](#local-file-distribution) via the built-in web server.
-2. [Amazon S3](#amazon-s3).
+2. [S3-compatible storage](#s3-compatible-storage).
3. Experimental [IPFS](#ipfs) support.
### Local file distribution
This is the simplest option and works out of the box. In this scenario, video is served to the public from the computer that is running the server. If you have a fast internet connection, enough bandwidth allotted to you, and a small audience, this may be fine for many people.
-### Amazon S3
+### S3-Compatible Storage
-Enable S3 support in `config.yaml` and add your AWS access credentials. Files will be distributed from a S3 bucket that you have created for this purpose. This is a good option for almost any case since S3 is cheap and you don't have to worry about your own bandwdith.
+Enable S3 support in `config.yaml` and add your access credentials. Files will be distributed from an S3 bucket that you have created for this purpose. This is a good option for almost any case since S3 is cheap and you don't have to worry about your own bandwidth.
+
+Please read the [more detailed documentation about configuration of S3-Compatible Services](https://github.com/gabek/owncast/blob/master/doc/S3.md).
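+
+For reference, the storage section of `config.yaml` ends up looking roughly like the sketch below. The exact key names here are illustrative and may not match your version of the config file, so treat the linked documentation as the source of truth.
+
+```
+# Note: the section/key names below are illustrative and may differ in your config.
+s3:
+  enabled: true
+  endpoint: s3.amazonaws.com   # or your provider's S3-compatible endpoint
+  region: us-east-1
+  bucket: myvideo              # a bucket you created just for Owncast
+  accessKey: ABC12334          # placeholder credentials
+  secret: fj3kd83jdkh
+```
+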
### IPFS
@@ -124,9 +126,10 @@ By editing the config file you can change what IPFS gateway server is used, and
Here's a list of some things you can do to increase performance and make things nicer for yourself.
+* Get a faster server with more cores so you can [enable more bitrates at once](https://github.com/gabek/owncast/blob/master/doc/configuration.md).
* Put a CDN in front of your server if you serve your files locally. You can even get a free one like [Cloudflare](https://www.cloudflare.com/). Then as more people view your stream people will no longer be downloading the stream directly from your server, but from the CDN instead, and it'll be faster. This is also a good way to enable SSL for your site.
-* If you use Amazon S3 for storage, have it [expire files from your bucket after N days](https://docs.aws.amazon.com/AmazonS3/latest/user-guide/create-lifecycle.html) because old files sitting on your S3 bucket aren't useful to anybody.
+* If you use S3 for storage, have it [expire files from your bucket after N days](https://github.com/gabek/owncast/blob/master/doc/S3.md) because old files sitting on your S3 bucket aren't useful to anybody.
* Edit the `webroot/index.html` file and make it look however you want.
@@ -148,14 +151,12 @@ The following is a list of things, as long as there's some traction, I'd like to
* Real web layout and chat UI is being worked on by [gingervitis](https://github.com/gingervitis).
-* Utilizing non-Amazon owned, but still S3 compatible storage. There's so many services out there that are S3 compatible such as [Linode Object Storage](https://www.linode.com/products/object-storage/), [Wasabi](https://wasabi.com/what-is-wasabi/), [Backblaze](https://www.backblaze.com/b2/cloud-storage-pricing.html), [Google Storage](https://cloud.google.com/storage/), [DreamHost DreamObjects](https://www.dreamhost.com/cloud/storage/), or you can [even run your own](https://min.io/). So it's good to have options.
+* Document more non-Amazon-owned but S3-compatible storage. There are so many S3-compatible services out there, such as [Backblaze](https://www.backblaze.com/b2/cloud-storage-pricing.html), [Google Storage](https://cloud.google.com/storage/), and [DreamHost DreamObjects](https://www.dreamhost.com/cloud/storage/), or you can [even run your own](https://min.io/). So it's good to have options.
* Refactor chat so it's more controlled by the server and doesn't accept just anything from clients and relay it back to everyone.
* Add more functionality to chat UI such as moderation (deleting messages), emojis/gif search, etc. You know, the stuff other services have and people are used to.
-* HLS adaptive bitrates. Right now there's a single bitrate being generated. We should be able to enable an array of bitrates in the config and spit out a HLS master playlist pointing to all of them.
-
* Collect viewer stats so you know how many people tuned into a stream. People seem to care about that kind of thing.
* Add a simple setup wizard that will generate the config file for you on the first run by asking simple questions.
diff --git a/doc/S3.md b/doc/S3.md
index 8f6bb36c3..1b3c31699 100644
--- a/doc/S3.md
+++ b/doc/S3.md
@@ -1,6 +1,101 @@
-Here are some details and tips specific to using S3 for storage.
+Here are some setup details, general information and tips specific to using external storage.
-## File expiration
+Choose the storage provider you want to use. Yours not listed? [File an issue](https://github.com/gabek/owncast/issues) and we'll test it and write up some documentation for it.
+
+* [Linode Object Storage](#linode-object-storage)
+* [AWS S3](#aws-s3)
+* [Wasabi](#wasabi-cloud-storage)
+
+## [Linode Object Storage](https://www.linode.com/pricing/?r=588ad4bf08ce8394e8eb11f0a463fde64637af9d/#row--storage)
+
+250 GB storage + 1 TB Outbound Transfer for $5/mo.
+
+Linode Object Storage is a good choice if you're already using Linode to host your server. It should be fast to transfer your video from your server to their storage service, and their pricing will probably just be the flat $5/mo for you, so it's easy to know what you're paying.
+
+![Linode bucket settings](linodebucket.png)
+
+* Create a new bucket at the [Linode Object Storage](https://cloud.linode.com/object-storage/buckets) admin page.
+* Edit your config file: set the S3 `endpoint` to the hostname listed below your newly created bucket (it looks something like `myvideo.us-east-1.linodeobjects.com`), set the bucket name to the one you just created, and set the S3 region to the region portion of that hostname (`us-east-1` in this example). See the sketch after this list.
+* Using the [Linode Object Access Keys](https://cloud.linode.com/object-storage/access-keys) page, create a new Access Key and add the Key and Secret to your `config.yaml` file.
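+
+As a rough sketch, the relevant part of `config.yaml` for a bucket named `myvideo` in `us-east-1` might look like this. The key names are illustrative, so match them to the ones already in your config file.
+
+```
+# Note: illustrative key names; adjust to your config file.
+s3:
+  enabled: true
+  endpoint: myvideo.us-east-1.linodeobjects.com   # the hostname shown below your bucket
+  region: us-east-1                               # the region portion of that hostname
+  bucket: myvideo
+  accessKey: ABC12334                             # from the Linode Object Access Keys page
+  secret: fj3kd83jdkh
+```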
+
+For the following steps, Linode requires you to interact with your bucket using the s3cmd tool, so install and configure it on your machine first.
+
+Run `s3cmd --configure` and fill in the values with what is currently in your config file. It should look similar to this:
+```
+Access Key: ABC12334
+Secret Key: fj3kd83jdkh
+Default Region: US
+S3 Endpoint: us-east-1.linodeobjects.com
+DNS-style bucket+hostname:port template for accessing a bucket: us-east-1.linodeobjects.com
+Use HTTPS protocol: False
+```
+
+### Add permissions to access video
+
+_This part sucks_. But you only have to do it once per bucket. [These are the full instructions](https://www.linode.com/docs/platform/object-storage/how-to-use-object-storage-acls-and-bucket-policies/#bucket-policies) but let me summarize.
+
+
+1. Create a file called `bucket_policy.json` with the following contents:
+```
+{
+  "Statement": [
+    {
+      "Effect": "Allow",
+      "Principal": {
+        "AWS": [
+          "*"
+        ]
+      },
+      "Action": [
+        "s3:GetObject"
+      ],
+      "Resource": [
+        "arn:aws:s3:::MYBUCKETNAME/*"
+      ]
+    }
+  ]
+}
+```
+
+1. Replace `MYBUCKETNAME` with your actual bucket name.
+1. Run `s3cmd setpolicy bucket_policy.json s3://MYBUCKETNAME` replacing `MYBUCKETNAME` with your bucket name.
+1. Run `s3cmd info s3://MYBUCKETNAME` to make sure the new policy saved.
+
+Now video files saved to Linode Object Storage will be publicly readable.
+
+More details about getting started with Linode Object Storage can be found [in their documentation](https://www.linode.com/docs/platform/object-storage/how-to-use-object-storage/).
+
+
+### File expiration
+
+Make files older than one day expire and delete themselves so you don't pay for storage of old video.
+
+Full details are in [their documentation](https://www.linode.com/docs/platform/object-storage/how-to-manage-objects-with-lifecycle-policies/).
+
+Create a file called `lifecycle_policy.xml` with the following contents:
+
+```
+<LifecycleConfiguration>
+    <Rule>
+        <ID>delete-all-objects</ID>
+        <Prefix></Prefix>
+        <Status>Enabled</Status>
+        <Expiration>
+            <Days>1</Days>
+        </Expiration>
+    </Rule>
+</LifecycleConfiguration>
+```
+
+* Run `s3cmd setlifecycle lifecycle_policy.xml s3://MYBUCKETNAME`.
+* Run `s3cmd info s3://MYBUCKETNAME` and you should now see ` Expiration Rule: all objects in this bucket will expire in '1' day(s) after creation`.
+
+
+## AWS S3
+
+AWS S3 is a good choice if you're already using AWS for your server or are comfortable using AWS for other things. If you're brand new to object storage and not already using AWS, I'm not sure I'd recommend jumping into it just for Owncast. There are other options.
+
+### File expiration
You should expire old segments on your S3 bucket. [Here are some instructions on how to do that.](https://docs.aws.amazon.com/AmazonS3/latest/user-guide/create-lifecycle.html)
@@ -8,6 +103,32 @@ You should expire old segments on your S3 bucket. [Here are some instructions o
* A one-day object expiration lifecycle rule is as low as you can go, so use that.
* Because AWS [rounds the expiration to midnight of the next day](https://aws.amazon.com/premiumsupport/knowledge-center/s3-lifecycle-rule-delay/) you may have a lot of old video chunks sitting around. You can make the most of this by increasing the `maxNumberInPlaylist` value in your config file to something much higher, allowing users to rewind your stream back in time further. If the video is available then you might as well make it available to your users.
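+
+For example (assuming `maxNumberInPlaylist` sits wherever it already lives in your `config.yaml`), keeping more segments around is just a matter of raising that number:
+
+```
+maxNumberInPlaylist: 30   # example value; more segments kept in the playlist means viewers can rewind further
+```
+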
-## CORS
+### CORS
-* Ugh. CORS. [You will need to enable CORS on your bucket](https://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html#how-do-i-enable-cors) so the web player can access the video.
\ No newline at end of file
+* Ugh. CORS. [You will need to enable CORS on your bucket](https://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html#how-do-i-enable-cors) so the web player can access the video.
+
+
+## [Wasabi cloud storage](https://wasabi.com/content-delivery/)
+
+Most people would end up paying $5.99/mo with Wasabi, and there are no additional costs based on how many people watch your video or how often they access it. So if you have a ton of viewers, this is probably a good option.
+
+### Create a user and access key
+
+1. [Create a new user on Wasabi](https://wasabi.com/wp-content/themes/wasabi/docs/Getting_Started/index.html#t=topics%2FCreating_a_User.htm) for yourself.
+1. [Create a new Access Key](https://wasabi.com/wp-content/themes/wasabi/docs/Getting_Started/index.html#t=topics%2FAssigning_an_Access_Key.htm) in the Users Panel.
+1. Update your Owncast `config.yaml` file with the above Access Key and Secret as well as the other required details.
+
+Depending on the region where your bucket lives, look up the service URL [from this page](https://wasabi-support.zendesk.com/hc/en-us/articles/360015106031-What-are-the-service-URLs-for-Wasabi-s-different-regions-). Then fill in the storage details in your config:
+
+* Endpoint: the "service URL" you looked up above, likely `s3.wasabisys.com` or similar.
+* Bucket: the name of the bucket you created for your video. See the sketch below this list.
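+
+A minimal sketch of the Wasabi storage settings might look like the following (again, the key names are illustrative, so match them to the ones already in your `config.yaml`):
+
+```
+# Note: illustrative key names; adjust to your config file.
+s3:
+  enabled: true
+  endpoint: s3.wasabisys.com   # the service URL for your bucket's region
+  region: us-east-1            # adjust to your bucket's region
+  bucket: myvideo
+  accessKey: ABC12334          # the Access Key and Secret you created above
+  secret: fj3kd83jdkh
+```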
+
+### Making files public
+
+Wasabi makes it easy to make a bucket public. [Full documentation is here](https://wasabi.com/wp-content/themes/wasabi/docs/Getting_Started/index.html#t=topics%2FMaking_Folders_and_or_Files_Public.htm), but simply select the folder and choose "_Make Public_".
+
+### Expiration of old files on Wasabi
+
+**Important note!** Wasabi does **not** seem to have a way to set a policy for deleting old files like AWS and Linode do. You may have your own way of cleaning up old files, or some other solution, but it's something to keep in mind in case you really start to build up a lot of old video files.
+
+If anybody knows how to enable Lifecycle Policies on Wasabi, please [file an issue with details](https://github.com/gabek/owncast/issues).
\ No newline at end of file
diff --git a/doc/linodebucket.png b/doc/linodebucket.png
new file mode 100644
index 000000000..d9532790d
Binary files /dev/null and b/doc/linodebucket.png differ