DevOps

Linux server administration, Docker, AWS, NGINX, Apache, SQL ...

Accessing file systems of Linux machines on the local network

Locate the other machines on your local network:

sudo arp-scan --localnet

Example result:

Interface: wlp0s20f3, type: EN10MB, MAC: 04:33:c2:71:7e:42, IPv4: 192.168.0.106
Starting arp-scan 1.9.7 with 256 hosts (https://github.com/royhills/arp-scan)
192.168.0.1    cc:32:e5:54:10:13    TP-LINK TECHNOLOGIES CO.,LTD.
192.168.0.105    f0:f0:a4:15:28:59    (Unknown)
192.168.0.113    18:a6:f7:1d:98:59    TP-LINK TECHNOLOGIES CO.,LTD.
192.168.0.181    00:0e:08:eb:76:d5    Cisco-Linksys, LLC
192.168.0.157    c0:e7:bf:09:b8:bd    (Unknown)
192.168.0.172    fc:b4:67:55:a5:24    (Unknown)
192.168.0.192    24:dc:c3:a1:80:f0    (Unknown)
192.168.0.175    ac:41:6a:26:cd:1f    (Unknown)
192.168.0.173    e8:4c:4a:b4:cc:5c    (Unknown)
192.168.0.184    4e:03:73:ea:b3:b8    (Unknown: locally administered)
192.168.0.186    5c:61:99:7a:64:5d    (Unknown)
192.168.0.103    0c:9d:92:29:4a:a3    ASUSTek COMPUTER INC.
192.168.0.129    62:1e:f2:c4:cf:80    (Unknown: locally administered)

Hopefully these names will allow you to identify the machines. In my case the target machine was using a TP-Link wireless card, so, knowing the router is always 192.168.0.1, I was able to deduce that the target machine's IP was 192.168.0.113.
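If the scan output is long, you can filter it for a vendor string straight from the shell. A small sketch (a canned sample of the output above stands in for a live `sudo arp-scan --localnet` run):

```shell
# canned sample of the arp-scan output above (stand-in for a live scan)
cat > /tmp/scan.txt <<'EOF'
192.168.0.1	cc:32:e5:54:10:13	TP-LINK TECHNOLOGIES CO.,LTD.
192.168.0.105	f0:f0:a4:15:28:59	(Unknown)
192.168.0.113	18:a6:f7:1d:98:59	TP-LINK TECHNOLOGIES CO.,LTD.
EOF

# keep only the IP column of rows whose vendor matches a pattern
grep -i 'tp-link' /tmp/scan.txt | awk '{print $1}'
```

In practice you would pipe the live scan through the same grep/awk filter.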

NOTE: in the example below I am using SSH, so the host and target machine will both require SSH to be installed. Use these commands to install it:

sudo apt-get install openssh-client
sudo apt-get install openssh-server

Now I am able to access the machine via SSH using the command ssh <username>@192.168.0.113. Once connected, I am prompted to enter the password for user <username>.

To explore the remote file system with Nautilus, I can open my local Nautilus window and under + Other Locations add ssh://<username>@192.168.0.113. Once open it will prompt me for the password for user <username>.

 ssh_with_nautilus.png

For a more permanent fix, you can add the host to your local ~/.ssh/config file as such:

Host bobsmachine
HostName 192.168.0.113
User bobsyetuncle

Now you can go into nautilus and under + Other Locations enter ssh://bobsmachine
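The same alias also works from a terminal. You can preview exactly what OpenSSH resolves the alias to without connecting (a sketch; it writes the entry to a throwaway config file, whereas normally you would use ~/.ssh/config directly):

```shell
# write the entry to a throwaway config to demonstrate (normally ~/.ssh/config)
cat > /tmp/ssh_demo_conf <<'EOF'
Host bobsmachine
    HostName 192.168.0.113
    User bobsyetuncle
EOF

# -G prints the options ssh would use for the alias, without connecting
ssh -F /tmp/ssh_demo_conf -G bobsmachine | grep -E '^(hostname|user) '
```

With the entry in your real ~/.ssh/config, `ssh bobsmachine` is equivalent to `ssh bobsyetuncle@192.168.0.113`.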

ssh_shortcut.png

When prompted for the username and password, selecting the "Remember forever" option will allow you to log in to the remote machine in the future without having to re-enter the password.

save_forver.png

Cloudflare for local server


Use Skiff for email. Copy the DNS settings from Skiff to Cloudflare.

See: https://www.youtube.com/watch?v=hrwoKO7LMzk
https://raidowl.hostyboi.com/2022/08/22/Cloudflare-Tunnel/

Install cloudflared:

wget -q https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64.deb && sudo dpkg -i cloudflared-linux-amd64.deb

Run a local server:

cd ~/www/nodeserver/
node hello.js


CLOUDFLARE

https://cyberhost.uk/cloudflare-argo-tunnel/#adding-more-services

https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/get-started/create-local-tunnel/

1.) log in to the Cloudflare dashboard

2.) create a new domain name

3.) open a terminal and log in via the command line:

  cloudflared tunnel login

Select the domain name. A new cert.pem file will be saved locally.

4.) create the tunnel:

 cloudflared tunnel create mysite.com

If the tunnel already exists, check with cloudflared tunnel list and delete it with cloudflared tunnel delete mysite.com.

A new credentials JSON file will be saved locally.

5.) create the config file /home/annie/.cloudflared/config.yml:

url: http://localhost:5000
tunnel: 63f68dbe-585c-4c30-bdd9-980c39aa23e1
credentials-file: /home/annie/.cloudflared/63f68dbe-585c-4c30-bdd9-980c39aa23e1.json
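If you later want a single tunnel to front several local services, cloudflared also supports ingress rules in the same config.yml. A sketch (the hostnames and ports here are illustrative; note that with ingress rules you drop the top-level url key, and a catch-all rule must come last):

```
tunnel: 63f68dbe-585c-4c30-bdd9-980c39aa23e1
credentials-file: /home/annie/.cloudflared/63f68dbe-585c-4c30-bdd9-980c39aa23e1.json
ingress:
  - hostname: mysite.com
    service: http://localhost:5000
  - hostname: blog.mysite.com
    service: http://localhost:8080
  # catch-all rule, required as the last entry
  - service: http_status:404
```

Each extra hostname also needs its own `cloudflared tunnel route dns` entry.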

6.) set up the DNS:

cloudflared tunnel route dns mysite.com mysite.com

Note: you may need to delete the existing CNAME record in the Cloudflare dashboard first.

7.) finally, run it:

cloudflared tunnel run mysite.com

To change the local server address later, edit ~/.cloudflared/config.yml.


SERVING LOCAL SERVICES 


sudo crontab -e 

@reboot /home/annie/work/impressto/new_server/startup.sh

Make sure to add local hostnames to /etc/hosts, otherwise they will not resolve for the tunnel locally.

Debug Docker Errors



Seriously have you tried just rebooting your machine?

For general container logs you can use the standard docker logs command:

docker logs -f --until=120s laravel

SSLCertificateFile: file ‘/config_items/certs/impressto.pem’ does not exist or is empty

If the folder ~/Sites/impressto-docker/config/certs exists but is empty you will need to run this terminal command:

cd ~/Sites/impressto-docker
./createSSLCert.sh

Ngserve is not Running 

If you are unable to load the webapp on ngserve.impressto.localhost, it is likely caused by a missing dependency in the ~/Sites/impressto-webapp folder. Most likely it is a missing environment.local.ts file.

You can test ngserve by logging into the docker container with “impressto” and running the following:

cd /var/www/impressto-webapp;
ng serve --configuration local --base-href /app/ --ssl true

Once you have fixed the issue you can run the “impressto” command again and wait a few minutes for ng-serve to rebuild the files. 

Cannot create container from docker-compose due to short volume name

You may have forgotten to edit the file ~/Sites/impressto-docker/.env.example.  Make changes as needed and save the file as .env. You can also run:

cd ~/Sites/impressto-docker;
./prepareDockerConfigs.sh;

Containers fail to load or shut down randomly on your machine but not others

If you see this happening it is likely RAM related. Either you are running out of memory or you have bad RAM.

First try importing a docker container. If the imported container works, you likely do not have hardware issues. If you are still having crashing containers after importing a container image, you need to start testing your system hardware. A common symptom of bad RAM is random computer crashes and intermittently freezing interfaces. If you need to reboot your machine several times a day, your hardware is probably baked.

Try memtester in Ubuntu 20

sudo apt-get install memtester
sudo memtester 1024 5

Another option is GTK Stress Tester but that will not find memory faults.

Composer running out of memory

Composer defaults to a maximum of 1.5G of memory usage. Sometimes this is not enough for a composer update. If you notice that builds are not completing correctly for this reason, a work-around is the following command:

COMPOSER_MEMORY_LIMIT=-1 composer update

Error: Cannot find module ‘nan’

This is more of an Angular issue on Mac, but you may run into it while setting up your local webapp. The fix is to go into the ~/Sites/impressto-webapp folder and enter this command:

npm i -g nan

Can’t connect to docker daemon. Is ‘docker -d’ running.

This error may also appear as:

ERROR: Couldn't connect to Docker daemon at http+docker://localhost - is it running?

This error is most commonly seen if you have not added your user to the docker group. Run the following commands:

sudo groupadd docker;

sudo usermod -aG docker $USER;

After that, log out and back in (or simply reboot) and the problem should go away.

create .: volume name is too short, names should be at least two alphanumeric characters

Did you remember to copy the file .env.example in the docker root folder to .env?

Also this can happen if the formatting in the docker-compose.yml is not correct (bad indenting).

Cannot use port 80

If you have nginx, apache or Skype installed on the host system, they will block the use of port 80. To determine what is running on port 80 use this command:

sudo lsof -i tcp:80

This should display something like this:

COMMAND  PID     USER   FD   TYPE DEVICE SIZE/OFF NODE NAME  
nginx   1858     root    6u  IPv4   5043      0t0  TCP ruir.mxxx.com:http (LISTEN)  
nginx   1867 www-data    6u  IPv4   5043      0t0  TCP ruir.mxxx.com:http (LISTEN)  
nginx   1868 www-data    6u  IPv4   5043      0t0  TCP ruir.mxxx.com:http (LISTEN)  
nginx   1869 www-data    6u  IPv4   5043      0t0  TCP ruir.mxxx.com:http (LISTEN)  
nginx   1871 www-data    6u  IPv4   5043      0t0  TCP ruir.mxxx.com:http (LISTEN)  

Identify the PID of the process using port 80 and kill it using a command like this:

sudo lsof -t -i tcp:80 -s tcp:listen | sudo xargs kill

You can also permanently turn off apache on the host with:

sudo service apache2 stop;
sudo service mysql stop;

# also prevent apache and mysql from starting as a service on bootup
sudo systemctl disable apache2 mysql;

In some cases it is easiest to just completely remove apache2 from the host system:

sudo apt-get --purge remove apache2;
sudo apt-get remove apache2-common;

NodeJS – FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed – JavaScript heap out of memory

This is not actually a docker error but may occur if you are running webpack builds inside docker (not recommended). If you are getting this error on your host system, try the following command, which is what we have used on the feature and builder servers:

# increase node memory to 2 GB
export NODE_OPTIONS=--max-old-space-size=2048

Performance issue with Mac:

Follow the official instructions for installing Docker on Mac. In a nutshell you will need to download Docker for Mac and install it as you would any other Mac app. IMPORTANT: make sure you have the latest version of Docker for Mac. Once installed you will need to allocate enough memory for Docker to run the containers; the recommended size is 8 GB. Not setting the memory limit may cause the Elasticsearch container to exit with a 137 error code (docker container out of memory). Linux does not require this config as it allocates memory directly from the host system.

Certbot & NGINX on AWS

Did you know you can use Certbot and NGINX to have a wildcard certificate? Here's how to do it with an AWS Ubuntu server.


Overview:

The high-level process is covered by the sections below: install Certbot and the Route 53 DNS plugin, create an IAM policy and role, request the certificates, update NGINX, then verify renewal.

Disclaimer: As with any change, please make sure that you have created a Jira ticket, received proper approval, notified business partners, scheduled the action and taken the necessary actions to backup and recover should anything go wrong.


Installing CertBot:

SSH to the web server and run the following commands:

sudo apt-get update
sudo apt-get install software-properties-common
sudo add-apt-repository universe
sudo add-apt-repository ppa:certbot/certbot
sudo apt-get update
sudo apt-get install certbot python-certbot-nginx

Install DNS Plugin:

SSH to the web server and run the following command:

sudo apt-get install python3-certbot-dns-route53

Create IAM Policy:

See also: https://certbot-dns-route53.readthedocs.io/en/stable/

Create new IAM policy using the AWS Route53 ZoneID of the hosted zone that you want to get an SSL Cert for.

{
    "Version": "2012-10-17",
    "Id": "certbot-dns-route53 sample policy",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "route53:ListHostedZones",
                "route53:GetChange"
            ],
            "Resource": [
                "*"
            ]
        },
        {
            "Effect" : "Allow",
            "Action" : [
                "route53:ChangeResourceRecordSets"
            ],
            "Resource" : [
                "arn:aws:route53:::hostedzone/YOURHOSTEDZONEID"
            ]
        }
    ]
}

Create a new IAM Role:

Associate Role with EC2 Instance:

Run CertBot and get new Certs:

It’s important to get both example.com and *.example.com, as wildcard certs need to include the naked domain as well as any subdomains.

Note: Be sure to review/update example.com, *.example.com before running the below command. 

sudo certbot certonly --dns-route53 -d example.com -d *.example.com --dns-route53-propagation-seconds 30 -m domains@mysite.com --agree-tos

If the above command runs successfully, it will populate the necessary certificate key files into the  /etc/letsencrypt/live/example.com/ directory.


Update NGINX to use new SSL Certs:

The next step is to update the existing SSL configuration of the NGINX server to use the new LetsEncrypt certs. There are a few common locations to check: /etc/nginx/nginx.conf, /etc/nginx/sites-enabled/ and /etc/nginx/conf.d/.

Between these locations, you should be able to locate the SSL configuration. What you're looking for are the two directives ssl_certificate and ssl_certificate_key, which need to be updated to point to the newly downloaded LetsEncrypt keys.
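Assuming the default Certbot paths, the two NGINX directives end up pointing at the live symlinks (fullchain.pem is the certificate plus intermediates, privkey.pem is the private key):

```
ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
```

Because /etc/letsencrypt/live/ holds symlinks to the newest cert material, renewals do not require editing these paths again.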


Test and restart NGINX:

Test that there are no errors in any of your NGINX files by running the following command

sudo nginx -t

If all of the tests come back as successful, you can go ahead and restart the nginx service

sudo service nginx restart

Validate SSL Cert:

Once restarted, open a browser window and visit your site. You want to validate that the website is using the new LetsEncrypt SSL cert and that the expiration is set 90 days out. Individual browser instructions can be found in the link provided below; what you're looking for is something like this:

SSL Cert Info

Instructions on how to view SSL certificate details in each browser can be found at https://www.globalsign.com/en/blog/how-to-view-ssl-certificate-details/


Test and review CertBot auto renewal:

The last thing to do before finishing up is to make sure that the automatic renewal process will work and that it is scheduled.

To test the auto renewal process run the following on the web server:

sudo certbot renew --dry-run

If successful, you can check whether a scheduled task is set to automatically run the renew process. By default, Certbot attempts a renewal twice a day; the schedule is typically installed as an /etc/cron.d/certbot entry or a certbot.timer systemd unit.

To check the status of Certbot, including the auto-renew runs, tail the log:

sudo tail -50 /var/log/letsencrypt/letsencrypt.log


Install and Configure Memcached

Memcached is a lightweight alternative to Redis for storing short-lived cache entries which would otherwise be written to the local storage folder as files.

Installing Memcached on Linux is fast and easy. Follow these steps (5 minute job):

1.) As a user with root privileges, enter the following command:

sudo apt-get update;
sudo apt-get install memcached libmemcached-tools php-memcached;

2.) Once the installation is completed, the Memcached service will start automatically. To check the status of the service, enter the following command:

sudo systemctl status memcached

3.) Change the memcached configuration settings for the cache size (-m) and the listen address (-l):

Open /etc/memcached.conf in a text editor.

Locate the -m parameter and change its value to at least 2048 (2 GB):

# memory
-m 2048

Locate the -l parameter and confirm its value is set to 127.0.0.1 or localhost

4.) Save your changes to memcached.conf and exit the text editor then restart memcached.

#restart memcached
sudo systemctl restart memcached

#confirm it is running
echo "stats settings" | nc localhost 11211

# check number of cached items
echo "stats items" | nc localhost 11211

5.) Note that on some systems memcached may not automatically start on bootup. In that case, use this command to fix it:

sudo systemctl enable memcached

6.) Add the PHP memcached extension:

 sudo apt-get install php7.3-memcached;

Configure Laravel to Use Memcached

Laravel is wired to use memcached out of the box. To enable it, you simply have to add this line to the .env file:

CACHE_DRIVER=memcached

If you need to edit the ports used by memcached, you can find those settings in config/cache.php.

PHP-FPM Optimization

Out of the box, PHP-FPM is configured for very low server specs, such as a 2-core machine. It needs to be tuned to match the hardware you are on, factoring in the most expensive processes you run.

Typically a low-end production server has 4 cores with 8 GB RAM, so you can use the following configuration:

Edit the file /etc/apache2/mods-enabled/mpm-event.conf and add the following:

# event MPM
# StartServers: initial number of server processes to start
# MinSpareThreads: minimum number of worker threads which are kept spare
# MaxSpareThreads: maximum number of worker threads which are kept spare
# ThreadsPerChild: constant number of worker threads in each server process
# MaxRequestWorkers: maximum number of worker threads
# MaxConnectionsPerChild: maximum number of requests a server process serves
# <IfModule mpm_event_module>
# 	StartServers			 2
# 	MinSpareThreads		 25
# 	MaxSpareThreads		 75
# 	ThreadLimit			 64
# 	ThreadsPerChild		 25
# 	MaxRequestWorkers	  150
# 	MaxConnectionsPerChild   0
# </IfModule>


#  ServerLimit           (Total RAM - Memory used for Linux, DB, etc.) / process size
#  StartServers          (Number of Cores)
#  MaxRequestWorkers     (Total RAM - Memory used for Linux, DB, etc.) / process size

<IfModule mpm_event_module>
        # for c5 classes with only 8GB ram
        # ServerLimit              500
        StartServers             4
        MinSpareThreads          25
        MaxSpareThreads          75
        ThreadLimit              64
        ThreadsPerChild          25
        MaxRequestWorkers        2800
        # for c5 classes with only 8GB ram
        # MaxRequestWorkers       1400
        MaxConnectionsPerChild   1000
</IfModule>

Edit the file /etc/php/7.4/fpm/pool.d/www.conf and make sure the following settings are there:

; settings explanation - don't need to copy this     
;pm.max_children         (total RAM - (DB etc) / process size)
;pm.start_servers        (cpu cores * 4)
;pm.min_spare_servers    (cpu cores * 2)
;pm.max_spare_servers    (cpu cores * 4)


; default is dynamic but that can churn up the memory because it leaves processes lingering
; pm = dynamic
pm = ondemand
; default is pm.max_children = 5
pm.max_children = 256

; everything below is only relevant if using pm = dynamic
; for c class servers with only 8GB ram
; pm.max_children = 128
; default is pm.start_servers = 2
pm.start_servers = 16
; default is pm.min_spare_servers = 1
pm.min_spare_servers = 8
; default is pm.max_spare_servers = 3
pm.max_spare_servers = 16
; setting to 0 or leaving commented out will use the PHP_FCGI_MAX_REQUESTS value whatever that is.
pm.max_requests = 1000
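The pm.max_children formula in the comments above can be sanity-checked directly in the shell. A sketch (the reserved headroom and average worker size here are assumptions; measure your own with ps or smem):

```shell
# rough sizing: (RAM available for PHP in MB) / (avg worker size in MB)
total_mb=8192      # 8 GB machine
reserved_mb=2048   # headroom for Linux, MySQL, etc. (assumption)
proc_mb=40         # assumed average PHP-FPM worker size

echo "pm.max_children ~= $(( (total_mb - reserved_mb) / proc_mb ))"
```

If your workers average 40 MB, that lands in the same ballpark as the 128-256 values used above; heavier workers shrink the number quickly.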

Now that we have allowed PHP to run a lot more threads, we may run into a "too many open files" error.

To fix it, edit /etc/php/7.4/fpm/php-fpm.conf and raise rlimit_files (e.g. to the value below). If you are still getting the "too many open files" error you can double it.

rlimit_files = 10000

You can also try editing /etc/security/limits.conf and adding the following:

*              hard    nofile      10000
*              soft    nofile      10000
www-data       soft    nofile  10000
www-data       hard    nofile  10000
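After logging back in, you can confirm the new descriptor limits took effect for your shell (limits.conf changes only apply to new sessions):

```shell
# current soft and hard open-file limits for this shell
ulimit -Sn
ulimit -Hn
```

To check the limit for the www-data user specifically, something like `sudo -u www-data bash -c 'ulimit -n'` works too.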

Restart everything:

sudo service apache2 restart && sudo service php7.4-fpm restart

See also https://medium.com/@sbuckpesch/apache2-and-php-fpm-performance-optimization-step-by-step-guide-1bfecf161534

Automatic AWS EC2 Backups

If you have a lot of developers working on the same server, there is nothing worse than having to fix something that went horribly wrong with it. That is why I wrote a script (see the bottom of this page) to help other developers back up their AWS EC2 instances daily and set the number of versions to keep. If a developer screws up the server, that is OK. You can just restore a copy from last night.

 

The first thing you will need to do is create an AWS IAM user that allows you to specify a backup policy. This user will be restricted to very limited abilities. Once the user has been created, apply a policy that only allows backups. I suggest AWSBackupFullAccess. Please avoid using full-access policies; they can allow someone to do crazy dangerous things (like spinning up multizone servers, $$$ ouch).

 

Once you have created a user with the required backup policy, create an Access Key. You will use the generated Access Key and Secret in the script below.

 

Now you can SSH into your EC2 instance (Ubuntu in my case) and install the AWS cli tool.

sudo apt-get -y install awscli; aws configure;

Fill in the appropriate values at the configuration prompts. Remember to use the Key and Secret you just created. You can see an example of what values the config tool expects in the script code below. Make sure you know the region, as the backup will only work if the region matches the EC2 instances you are backing up.

Next you need to get the ID of the EC2 instance or instances you want to back up. The example script below only backs up one server, but you can do many. Example:

instances+=("autobackup_development|i-0ed78a1f3583e1543")
instances+=("autobackup_staging|i-0ed72a1f3583e343")
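Each array entry packs the backup name prefix and the instance ID into one pipe-delimited string; the script pulls them apart with cut, like this:

```shell
entry="autobackup_staging|i-0ed72a1f3583e343"

# field 1 is the image name prefix, field 2 the EC2 instance id
instanceName="$(cut -d'|' -f1 <<<"$entry")"
instanceId="$(cut -d'|' -f2 <<<"$entry")"

echo "$instanceName $instanceId"
```

The date is then appended to the name, so each nightly AMI is uniquely named per instance.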

 

Once that is done you are ready to add the script to your server. It will run off a cron. Obviously, make sure you put the file someplace that is not accessible to the public (e.g. not in a public website folder).

Make the script executable using the chmod +x command. Then give it a test.

 

Once you have confirmed it runs and you can see the AMI (EC2 backup image) created or being created, you can add a cron to automate the backups. Use this command to create or edit the crontab. Note that for Ubuntu the typical user is "ubuntu" but for AWS Linux it will be "ec2-user".

sudo crontab -e -u ubuntu

Add the following line (adjust the path to your script)

# backup EC2 instances nightly 4:30 am GMT
30 4 * * * . $HOME/.profile;  /var/devops/ec2_backup.sh

Now we are done. You can sleep at night knowing that no matter how much someone screws up the server, they won't screw up your day.


Here is the full script:

#!/bin/bash

# prior to using this script you will need to install the aws cli on the local machine

# https://docs.aws.amazon.com/AmazonS3/latest/dev/setup-aws-cli.html

# AWS CLI - will need to configure this
# sudo apt-get -y install awscli 
# example of current config - july 10, 2020
#aws configure
#aws configure set aws_access_key_id ARIAW5YUMJT7PO2N7L                          # fake - use your own
#aws configure set aws_secret_access_key X2If+xa/rFITQVMrgdQVpFLx1c7fwP604QkH/x  # fake - use your own
#aws configure set region us-east-2
#aws configure set output json



# backup EC2 instances nightly 4:30 am GMT
# 30 4 * * * . $HOME/.profile;  /var/www/devopstools/shell-scripts/file_backup_scripts/ec2_backup.sh

script_dir="$(dirname "$0")"

# If you want live notifications about backups, use this example with a correct slack key
#SLACK_API_URL="https://hooks.slack.com/services/T6VQ93KM/BT8REK5/hFYEDUCoO1Bw72wxxFSj7oY"


prevday1=$(date --date="2 days ago" +%Y-%m-%d)
prevday2=$(date --date="3 days ago" +%Y-%m-%d)
today=$(date +"%Y-%m-%d")

instances=()
# add as many instances to backup as needed
instances+=("autobackup_impressto|i-0ed78a1f3583e1543")


for ((i = 0; i < ${#instances[@]}; i++)); do

    instance=${instances[$i]}

    instanceName="$(cut -d'|' -f1 <<<"$instance")"
    instanceId="$(cut -d'|' -f2 <<<"$instance")"

    prevImageName1="${instanceName}_${prevday1}_$instanceId"
    prevImageName2="${instanceName}_${prevday2}_$instanceId"
    newImageName="${instanceName}_${today}_$instanceId"

    echo -e "\e[92mBegin backing up $instanceName [$instanceId]\e[0m"

    aws ec2 create-image \
        --instance-id $instanceId \
        --name "$newImageName" \
        --description "$instanceName" \
        --no-reboot

    if [ $? -eq 0 ]; then
        echo "$newImageName created."
        echo ""
        if [ ! -z "${SLACK_API_URL}" ]; then
            curl -X POST -H 'Content-type: application/json' --data '{"text":":rotating_light: Backing up *'$newImageName'* to AMI. :rotating_light:"}' ${SLACK_API_URL}
        fi
        echo -e "\e[92mBacking up ${newImageName} to AMI.\e[0m"
    else
        echo "$newImageName not created."
        echo ""
    fi

    imageId=$(aws ec2 describe-images --filters "Name=name,Values=${prevImageName1}" --query 'Images[*].[ImageId]' --output text)

    if [ ! -z "${imageId}" ]; then

        echo "Deregistering ${prevImageName1} [${imageId}]"
        echo ""
        echo "aws ec2 deregister-image --image-id ${imageId}"
        aws ec2 deregister-image --image-id ${imageId}
    fi

    imageId=$(aws ec2 describe-images --filters "Name=name,Values=${prevImageName2}" --query 'Images[*].[ImageId]' --output text)

    if [ ! -z "${imageId}" ]; then

        echo "Deregistering ${prevImageName2} [${imageId}]"
        echo ""
        echo "aws ec2 deregister-image --image-id ${imageId}"
        aws ec2 deregister-image --image-id ${imageId}
    fi

    echo -e "\e[92mCompleted backing up $instanceName\e[0m"

done

Generally Useful Docker Commands

Remove all Docker Containers

Stop the container(s):

cd ~/mydocker-repo-folder;
docker-compose down;

Delete all containers:

docker rm -f $(docker ps -a -q)

Delete all volumes:

docker volume rm $(docker volume ls -q)

Delete all networks:

docker network rm $(docker network ls -q)
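As an alternative to the individual steps above, Docker's built-in prune can do most of this in one shot. Careful: this removes ALL unused images, stopped containers, unused networks and volumes on the machine, not just the ones for one project:

```shell
docker system prune --all --volumes --force
```

Drop --force if you want a confirmation prompt first.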

Kill a specific container:

docker container kill [CONTAINER_NAME]

Saving and Restoring Docker Containers

In cases where you cannot for whatever reason build docker containers on your local system, do not fear. Docker allows you to save and import backed up images of containers.

Saving Containers

It is a good habit to routinely save containers. Just open a terminal and use the docker save command. Example here:

docker save -o ~/Desktop/my_docker_image.tar laravel

Once that is saved you can share it with other developers or keep it as a personal backup.  You can also share it with another developer directly using JustBeamIt.

Restoring from a Container Image

If one of your containers is acting wonky, you can get the name and image id with the following command:

docker images

You can see the image name and id in the list. 

DeepinScreenshot_select-area_20200902153536.png

If the container is running, you can shut it all down with "docker-compose down". Then you can delete the offending image with the docker image rm command. Here is an example:

# kill docker compose
cd ~/my-docker-folder;
docker-compose down;

docker image rm 3f8c96702c14

Now you can load a new container to replace the broken one.  To do this you will need to get an image from another developer or use one you previously saved. 

To load the container from the image use the docker load command. Example here:

docker load -i ~/Desktop/my_docker_image.tar

Running multiple services in one container

In my case I want to serve some pages with PHP and others with NodeJS within the same container. This saves a lot of build time and memory. So here is what I add to my Dockerfile:

CMD /config_items/run.sh

Then in the file run.sh I start PHP, nginx and a NodeJS app, using a single ampersand after each command to run it as a background process. This lets me run as many processes as needed concurrently.

service php8.0-fpm start & nginx -g 'daemon off;' & cd /var/www/pslamp-blog && npm run start
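If the foreground process in that chain dies, the container can linger in a half-broken state. A slightly more robust run.sh, as a sketch (assumes bash 4.3+ for wait -n, and reuses the paths from the one-liner above):

```shell
#!/bin/bash
# start PHP-FPM via its init script (returns once started)
service php8.0-fpm start

# run the node app and nginx as background jobs of this script
(cd /var/www/pslamp-blog && npm run start) &
nginx -g 'daemon off;' &

# exit as soon as EITHER background job dies, so docker can
# restart the whole container instead of limping along
wait -n
```

Pairing this with `restart: unless-stopped` in docker-compose gives you cheap self-healing.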


Connect to Remote Servers with VSCode

By far one of the coolest VSCode extensions I've used in a while. This saves me so much time when debugging dev/build machines. I also use Nautilus on Linux to browse remote servers, but being able to edit code as if it were local saves a heck of a lot of time.

Add the remote SSH VSCode extension:  https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-ssh

ext install ms-vscode-remote.remote-ssh

Now add an SSH entry to your local ~/.ssh/config file.

Example entry:

Host impressto
HostName 154.343.23.44
User ubuntu
IdentityFile ~/work/keys/mysite.pem

Open a terminal and test to make sure you can SSH in. For the example config above you can just use:

ssh impressto

In VSCode use Ctrl+Shift+P, then enter ssh. Select the first option, Remote-SSH: Connect to Host.

Create your own “cloud” storage with Syncthing

I have been using Syncthing for years now and had assumed everyone had at least heard of it. Apparently not. When I do mention it, people seem to think it is an impossible thing. It isn’t, and it is really easy to set up.

What the heck is Syncthing?

It is a fully open source, private, decentralized file syncing system that uses torrent-style technology to share files between multiple machines/devices. There is no “middleman” to cut the connection, so running Syncthing between your own devices really is your own private cloud. It comes with a great GUI and is very easy to use.

Why Syncthing?

Traditional cloud storage is cheap enough that the cost is not prohibitive for most people, at present. In my case most of my backups are for files I won’t look at for years – maybe even decades. A LOT can change in a decade when it comes to online services. Anyone who ever used Panoramio can tell you about the millions of user-uploaded pictures Google simply decided to delete. The point is, backups of personal docs, pictures, etc. are YOURS and nobody else should be able to decide how or if they will be stored.

Syncthing allows you to use multiple devices to provide redundancy. If a hard drive on one device fails, you still have copies on other devices. It is also a lot faster than using a cloud service because typically you are only transferring files locally on the same network, although you can share files with any device anywhere in the world if you want to.

Setting up Syncthing on Ubuntu


sudo apt install curl apt-transport-https;

curl -s https://syncthing.net/release-key.txt | sudo apt-key add -;

echo "deb https://apt.syncthing.net/ syncthing release" | sudo tee /etc/apt/sources.list.d/syncthing.list;

sudo apt-get update;

sudo apt-get install syncthing;

# replace username with your own system username
sudo systemctl enable syncthing@username.service;

# replace username with your own system username
sudo systemctl start syncthing@username.service;

Once you have completed the commands above you can open the syncthing GUI in your browser with http://127.0.0.1:8384


Debug PHP with XDebug and VSCode (docker edition)

If you are using Docker you will want to add this to your Dockerfile (it runs when the container image is built).


RUN pecl install -f xdebug-2.9.8 \
&& rm -rf /tmp/pear \
&& echo "zend_extension=$(find /usr/local/lib/php/extensions/ -name xdebug.so)" > /usr/local/etc/php/conf.d/xdebug.ini;


Xdebug configuration

You can tweak the Xdebug configuration on file docker-compose.yml:

The laravel container definition has an environment variable for this purpose

- XDEBUG_CONFIG=remote_host=mysite.docker.laravel remote_port=9000 remote_enable=1 remote_autostart=1 default_enable=1 idekey=VSCODE remote_connect_back=1

Adjust it as needed; in particular, the idekey should match the key set in your IDE.

VSCode setup

On VS Code we can use the PHP Debug plugin; once installed, go to the Debug panel (Ctrl+Shift+D).
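A minimal .vscode/launch.json for the PHP Debug extension might look like the following sketch. Port 9000 matches the remote_port in the XDEBUG_CONFIG variable above; the pathMappings entry is an assumption — map your container’s web root to your local workspace:

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Listen for Xdebug",
      "type": "php",
      "request": "launch",
      "port": 9000,
      "pathMappings": {
        "/var/www/html": "${workspaceFolder}"
      }
    }
  ]
}
```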

SSH Access with Nautilus

If using Linux with Nautilus you can connect directly to the server. 

1.) Create a config file in your ~/.ssh directory (no sudo needed, it is your own file).

gedit ~/.ssh/config

Paste the following and save. You may need to edit the path to your pem files.

Host myserver
HostName 18.216.138.59
User ubuntu
IdentityFile ~/keys/myserver.pem

Now you can connect from the terminal:

ssh myserver

This will connect to your remote Amazon EC2 server without any other info.

Open Nautilus, press Ctrl+L, type ssh://myserver and press enter.

Note you can also just transfer files directly with scp. For example:

scp -i ~/keys/myserver.pem file.txt ubuntu@18.216.138.59:/var/www/mysqldump/.

NameCheap SSL Certificates

Namecheap is, as the name suggests, a cheap place to get stuff. Their SSL certificates cost about 1/5 of what they cost at GoDaddy and are pretty much just as good. There are some odd bugs with the Namecheap site. Below are the steps you need to successfully create and deploy an SSL certificate from Namecheap.

1.) Create a CSR file

openssl req -new -newkey rsa:2048 -nodes -keyout mysite.key -out mysite.csr
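If you prefer to skip the interactive prompts, a non-interactive variant follows — the -subj string fills in the prompts (all values here are examples; the CN must be your actual domain) — plus a quick sanity check of the CSR before uploading it:

```shell
# Generate key + CSR without prompts (example subject values)
openssl req -new -newkey rsa:2048 -nodes \
  -keyout mysite.key -out mysite.csr \
  -subj "/C=CA/ST=Ontario/L=Toronto/O=MySite/CN=mysite.com"

# Inspect the CSR's subject to confirm the CN before sending it to Namecheap
openssl req -noout -subject -in mysite.csr
```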

2.) Go to https://ap.www.namecheap.com/

3.) Upload the CSR file to the Namecheap site. This will let you get a validation file.

4.) Add the validation file to the website root folder under /.well-known/pki-validation/

This automatically validates with a URL such as: http://mysite.com/.well-known/pki-validation/AEF34B001667BF75FD31F090F99754C0.txt

If it fails to validate contact support and they can force it.

https://www.namecheap.com/support/knowledgebase/article.aspx/9464/69/can-i-download-an-issued-certificate-on-your-site

https://www.namecheap.com/support/knowledgebase/article.aspx/9593/33/installing-an-ssl-certificate-on-amazon-web-services-aws/

https://www.namecheap.com/support/knowledgebase/article.aspx/10314/33/ssl-certificate-installation-on-apache2-debian-ubuntu/

5.) At this point you should be ready to add the generated SSL certificate to the server. Download the package.

6.) Add the downloaded files to your /etc/apache2/ssl folder.

7.) Add the config file to your vhost file. It should look something like this:

<VirtualHost *:80>
    ServerName stuff.mysite.com
    DocumentRoot /var/www/stuff/public
</VirtualHost>

<VirtualHost *:443>
    ServerName stuff.mysite.com
    DocumentRoot /var/www/stuff/public
    <Directory /var/www/stuff/public>
            Options FollowSymLinks
            AllowOverride All
            DirectoryIndex index.php
     </Directory>

	Include /etc/apache2/ssl/mysite_2021/namecheap-ssl-apache.conf
            
</VirtualHost>

8.) Restart the server with sudo service apache2 restart and you should be good. 

Using Cloudfront For CDN

Basic Setup

To set up a CDN using Cloudfront you first need to create an S3 bucket and make it public. In this example we will use pslampdemo.s3.amazonaws.com

Note that when setting up a cloudfront distribution you will need to assign an SSL certificate. See: https://impressto.net/aws-setup-ssl-certificates

Once your public S3 bucket has been created, go to the Cloudfront console and create a new distribution: https://console.aws.amazon.com/cloudfront/home?region=us-east-2#create-distribution

  1. Select one of the S3 buckets we are using for the CDN.
  2. For the origin path we will leave it empty so we can use the root folder of the S3 bucket.
  3. Select the HTTP > HTTPS redirect as a precaution to prevent accidental use of assets over HTTP
  4. For alternative domain names add the domain name we will be using for the CDN. This will be added to route53 as a CNAME record.
  5. Select the ssl certificate (this is one we upload ourselves)
  6. Click the Create Distribution button. It takes several minutes for a distribution to generate but that is ok as we have work to do now with route53.
  7. Click on the new distribution to get the URL. You can now look for the new domain name for the distribution. It will look something like: dr8thfc1fd2g.cloudfront.net
  8. copy the domain and head over to Route53 – https://console.aws.amazon.com/route53/home?region=us-east-2
  9. Add the CNAME record linking pslampdemo.com to the cloudfront distribution domain (e.g. dr8thfc1fd2g.cloudfront.net)
  10. That’s it.

After saving the CNAME record you will be able to access the S3 assets with the CDN domain.

Enabling CORS for CDN
https://docs.aws.amazon.com/AmazonS3/latest/user-guide/add-cors-configuration.html
https://aws.amazon.com/premiumsupport/knowledge-center/no-access-control-allow-origin-error/

You will need to enable CORS for the S3 bucket. Navigate to the S3 bucket on AWS and click the CORS Configuration button.

Add the following XML and save:

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
</CORSRule>
</CORSConfiguration>
Enable Header Whitelisting in the Cloudfront Distribution

To forward the headers from your CloudFront distribution, follow these steps:

  1. Open your distribution from the CloudFront console.
  2. Choose the Behaviors tab.
  3. Choose Create Behavior, or choose an existing behavior, and then choose Edit.
  4. For Cache Based on Selected Request Headers, choose Whitelist.
  5. Under Whitelist Headers, choose headers from the menu on the left, and then choose Add.

Enable GZip Compression

By default gzip compression is off. To turn it on you will need to edit the behavior settings for the Cloudfront distribution.

  1. Select the distribution and click the “Distribution Settings” button
  2. Select the “Behaviors” tab then click the “Edit” button
  3. Set the cache policy to “Managed-CachingOptimized” and turn on “Compress Objects Automatically”



Invalidating Cloudfront Files

To clear files from the cache (e.g. SSL updates or emergency fixes after a deployment) follow these steps:

  1. Go to the CloudFront console and open the Distributions page.
  2. Select the distribution for which you want to invalidate files.
  3. Choose Distribution Settings.
  4. Choose the Invalidations tab.
  5. Choose Create Invalidation.
  6. For the files that you want to invalidate, enter one invalidation path per line. For information about specifying invalidation paths, see Specifying the Files to Invalidate.
  7. Important: specify file paths carefully. You can’t cancel an invalidation request after you start it.
  8. Choose Invalidate.

How S3 paths become CDN paths

This S3 path

https://pslampdemo.s3.us-east-2.amazonaws.com/website/images/kitten.png

now works as:

https://dr8thfc1fd2g.cloudfront.net/website/images/kitten.png

which in turn, with a CNAME record in Route53, becomes:

https://pslampdemo.com/website/images/kitten.png

Additional info:

https://console.aws.amazon.com/cloudfront/home?region=us-east-2#create-distribution:
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-values-alias.html#rrsets-values-alias-alias-target
https://aws.amazon.com/blogs/aws/new-gzip-compression-support-for-amazon-cloudfront/

Using CDN for WordPress

We use the WP Offload Media plugin to host WordPress media files on S3. This allows us to upload images to a standard folder which is then accessible via the CDN.

Make sure to read the instructions and set your CDN URL accordingly.

Connect to S3 from your local Ubuntu file system

For Mac and Linux you can connect to S3 buckets from your local file navigator using s3fs.

https://cloud.netapp.com/blog/amazon-s3-as-a-file-system

Here are the commands you need for Ubuntu. Replace BUCKETNAME with the name of your S3 bucket.

cd ~/;

# for Debian (Ubuntu)
sudo apt-get install s3fs;

# for mac use: brew install s3fs

echo ACCESS_KEY:SECRET_KEY > ~/.passwd-s3fs;
# confirm entry was added
cat ~/.passwd-s3fs;
chmod 600 .passwd-s3fs;
mkdir ~/BUCKETNAME-s3-drive;
s3fs BUCKETNAME ~/BUCKETNAME-s3-drive;
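To remount the bucket automatically at boot, an /etc/fstab entry along these lines can be added (a sketch — adjust BUCKETNAME and the paths; the allow_other option also requires user_allow_other to be enabled in /etc/fuse.conf):

```
s3fs#BUCKETNAME /home/youruser/BUCKETNAME-s3-drive fuse _netdev,allow_other,passwd_file=/home/youruser/.passwd-s3fs 0 0
```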

You can then navigate the S3 drive as a local drive.

Protecting wp-admin from bots

The most common attack on a WordPress site is against the login page. Weak or compromised passwords are exploited by automated bots that hit thousands of sites a day, trying multiple username/password combinations.

In this article I will show you how to use a .htpasswd file with nginx on Ubuntu (or any Debian system) to prevent bots from accessing your WordPress login URL.


First of all install apache2-utils:

sudo apt-get update -y;
sudo apt-get install -y apache2-utils;

Create a .htpasswd file

 sudo htpasswd -c /var/www/.htpasswd mysiteadminusernameamajigger
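If apache2-utils is unavailable, an equivalent entry can be produced with openssl (a sketch — the username and password here are placeholders; append the printed line to /var/www/.htpasswd yourself):

```shell
user="mysiteadminusernameamajigger"
pass="changeme"
# -apr1 emits the Apache MD5 scheme, which nginx's auth_basic understands
printf '%s:%s\n' "$user" "$(openssl passwd -apr1 "$pass")"
```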

Edit your /etc/nginx/sites-available/vhost file to add:

	location /wp-login.php {
    	    auth_basic       "Administrators Area";
	    auth_basic_user_file /var/www/.htpasswd; 
	}

	location /wp-admin {
    	    auth_basic       "Administrators Area";
	    auth_basic_user_file /var/www/.htpasswd; 
	}

Full example of my own file :

server {

    root /var/www/impressto.net;
    index index.php index.html index.nginx-debian.html;
    server_name impressto.net www.impressto.net;

    location / {
        root /var/www/impressto.net;
        if (!-e $request_filename) {
            rewrite ^/(.*)$ /index.php?q=$1 last;
        }
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.4-fpm.sock;
    }

    location /wp-login.php {
        auth_basic       "Administrators Area";
        auth_basic_user_file /var/www/.htpasswd; 
    }

    location /wp-admin {
        auth_basic       "Administrators Area";
        auth_basic_user_file /var/www/.htpasswd; 
    }

}

Now test your config:

sudo nginx -t;

If no errors are shown, restart nginx:

sudo systemctl restart nginx;

Now if you go to your wp-admin URL you will get a blocking password prompt. This will stop most automated bots.

Skip Password Prompts for Sudo commands

When administering a development machine or server you may find yourself needlessly entering your sudo password. On a production machine this is something you’d want, but for a local or development machine, not so much.

Here’s how you can bypass the password:

Open the /etc/sudoers file (as root, of course!) by running:

sudo visudo

Note you should never edit /etc/sudoers with a regular text editor, such as Vim or nano, because they do not validate the syntax like the visudo editor.

At the end of the /etc/sudoers file add this line replacing username with your actual username:

username     ALL=(ALL) NOPASSWD:ALL

Save the file and exit. If you have any sort of syntax problem, visudo will warn you and you can abort the change or open the file for editing again. It is important to add this line at the end of the file, so that the other permissions do not override this directive, since they are processed in order.
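An alternative to editing /etc/sudoers itself is a drop-in file under /etc/sudoers.d, which can be syntax-checked before it is installed. A sketch (the file name 90-nopasswd is arbitrary; the final install step is left commented out):

```shell
# Build the rule for the current user and stage it in a temp file
me="${USER:-$(id -un)}"
printf '%s ALL=(ALL) NOPASSWD:ALL\n' "$me" > /tmp/90-nopasswd

# visudo -cf parses a candidate file without touching the real sudoers
command -v visudo >/dev/null && visudo -cf /tmp/90-nopasswd || echo "visudo check skipped"

# When the check passes, install it (sudoers.d files must be mode 0440):
#   sudo install -m 440 /tmp/90-nopasswd /etc/sudoers.d/90-nopasswd
```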


Note that for Mac the save steps are a little different because Mac uses vim for visudo edits. Press the Escape key, then type :wq and press enter. This saves the file and quits vim.

Finally, open a new terminal window and run a command that requires root privileges, such as sudo apt-get update. You should not be prompted for your password!

Fix Localhost Binding for Safari

Safari does not automatically resolve *.localhost domains to 127.0.0.1. To use Safari for local development, and especially when using Docker with SSL, you will need to add the entries to your /etc/hosts file. Here is an example:

sudo nano /etc/hosts
# add the following entries
127.0.0.1       mysite.localhost
127.0.0.1       somesubdomain.mysite.localhost

Create an SSH Key for Git

SSH keys are not just for Git, but if you want to use SSH cloning for Git, yeah, you need ’em.

To create a new SSH key pair do the following:

1.) Open a terminal on Linux or macOS, or Git Bash / WSL on Windows.

2.) Generate a new SSH key pair (RSA, or the more modern ED25519):

ssh-keygen -t rsa -b 2048 -C "username@mysite.com" 
or ssh-keygen -t ed25519 -C "username@mysite.com"

3.) Use the defaults for all options if you like; it doesn’t matter for most setups.

Copying SSH Key to Gitlab

Go into the  ~/.ssh folder. On Mac you may need to do the following to see the .ssh folder:

# open the finder dialog
Command+Shift+G 
# enter ~/.ssh

# view hidden files
Command+Shift+.

On Linux:

cd ~/.ssh

# on your keyboard hit Ctrl +h

Once you can see the hidden files you should see a file named id_rsa.pub or similar — it ends with .pub. Open that file with a text editor and you will see the SSH public key you need to copy to your own GitLab account.

Using an Access Token (works too, but yuk!)

If you are not using an SSH connection you may need to create a personal access token (image below). Make sure you save the token on your local machine as you will not be able to retrieve it once you close the page where you created it on GitLab.

To clone a repo using an access token, the command is similar to cloning with https but the URL is slightly different. If your token is, for example, xSx81KqnADs-mZ4JviHa, the cloning command will be 

git clone https://oauth2:xSx81KqnADs-mZ4JviHa@gitlab.com/myaccount/myrep.git

If you were previously using https with a username and password, you will need to update the remote URL of your local repo to include the access token. Here is an example of the old remote URL and the new one:

# old url
https://somegitsite.com/mycompany/mysite.git

# new url with access token
https://oauth2:uggU-s2usayJtiqguEAQ@somegitsite.com/mycompany/mysite.git

To set the remote url use the following command as an example:

git remote set-url origin https://oauth2:AmDAyXHEVxyEBf3fbg@somegitsite.com/mycompany/mysite.git
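You can rehearse the swap in a throwaway repo first (the URL and TOKEN below are placeholders):

```shell
# Scratch repo with the "old" https remote
rm -rf /tmp/remote-demo && mkdir -p /tmp/remote-demo
git -C /tmp/remote-demo init -q
git -C /tmp/remote-demo remote add origin https://somegitsite.com/mycompany/mysite.git

# Swap in the token-bearing URL, then confirm it took effect
git -C /tmp/remote-demo remote set-url origin https://oauth2:TOKEN@somegitsite.com/mycompany/mysite.git
git -C /tmp/remote-demo remote get-url origin
```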

Install Mkcert for SSL on Localhost

mkcert_on_local.jpg

Mkcert is a simple tool for generating locally-trusted development SSL/TLS certificates. It requires minimal configuration.

https://github.com/FiloSottile/mkcert

Prerequisites (Ubuntu / Debian)

To install the prerequisites on Debian (Ubuntu) use the following command:

sudo apt install libnss3-tools

Install LinuxBrew  – get the installer

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"

Enable it

test -d ~/.linuxbrew && eval $(~/.linuxbrew/bin/brew shellenv)
test -d /home/linuxbrew/.linuxbrew && eval $(/home/linuxbrew/.linuxbrew/bin/brew shellenv)
test -r ~/.bash_profile && echo "eval \$($(brew --prefix)/bin/brew shellenv)" >>~/.bash_profile
echo "eval \$($(brew --prefix)/bin/brew shellenv)" >>~/.profile

Install mkcert

brew install mkcert;
mkcert -install;

If at this point you are getting a mkcert command not found error, or Warning: /home/linuxbrew/.linuxbrew/bin is not in your PATH, you may need to fix the PATH variable to include the mkcert bin folder. Edit your ~/.profile file and add the following:

if [ -d "/home/linuxbrew/.linuxbrew/bin" ] ; then
    PATH="/home/linuxbrew/.linuxbrew/bin:$PATH"
fi

Save the .profile file and from the terminal run source ~/.profile

An alternative way to install Mkcert is the following (not fully tested by me): 

sudo apt-get update
sudo apt install wget libnss3-tools
MCVER="v1.4.1"
wget -O mkcert https://github.com/FiloSottile/mkcert/releases/download/${MCVER}/mkcert-${MCVER}-linux-amd64
chmod +x mkcert
sudo mv mkcert /usr/local/bin

Why This Matters

Using mkcert lets you emulate HTTPS in your local environment, which helps with catching mixed content issues, testing secure cookies, HSTS, etc., before you deploy to production.


Apache Tricks

Set Server Agent Name

sudo apt-get install libapache2-mod-security2

Once the module is installed, you can modify the Apache config under the file /etc/apache2/apache2.conf. Add this line around the end of the file.

<IfModule mod_security2.c>
SecServerSignature "ecoware"
</IfModule>

How to set the Expires Headers in Apache

enable expires and headers modules for Apache

sudo a2enmod expires;
sudo a2enmod headers;

Edit the /etc/apache2/apache2.conf file

sudo nano /etc/apache2/apache2.conf

Add the following

<IfModule mod_expires.c>
ExpiresActive on
AddType image/x-icon .ico
ExpiresDefault "access plus 2 hours"
ExpiresByType text/html "access plus 7 days"
ExpiresByType image/gif "access plus 7 days"
ExpiresByType image/jpg "access plus 7 days"
ExpiresByType image/jpeg "access plus 7 days"
ExpiresByType image/png "access plus 7 days"
ExpiresByType text/js "access plus 2 hours"
ExpiresByType text/javascript "access plus 2 hours"
ExpiresByType text/plain "access plus 2 hours"
ExpiresByType image/x-icon "access plus 30 days"
ExpiresByType image/ico "access plus 30 days"
</IfModule>

Restart apache

sudo service apache2 restart

Check that it worked by loading an image.  You should see an expired line in the output such as

Expires: Wed, 22 Aug 2020 22:03:35 GMT
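That Expires value is just an HTTP date some offset past the access time; with GNU date you can reproduce the exact format, e.g. for "access plus 7 days":

```shell
# RFC 1123 date, 7 days from now, in UTC (GNU date's -d option assumed)
date -u -d "+7 days" "+%a, %d %b %Y %H:%M:%S GMT"
```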

See: https://hooshmand.net/fix-set-expires-headers-apache/

NodeJS Proxy via Apache

Here is how to serve Node.js entry points using an Apache proxy. This hides the port number, and the Node.js entry points simply appear as part of the “monolithic” application. 

WINDOWS:

Setup is easy:

1.) In your Apache config file (with XAMPP this is C:\xampp\apache\conf\httpd.conf) make sure the following lines are enabled:

Include "conf/extra/httpd-proxy.conf"
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_connect_module modules/mod_proxy_connect.so
LoadModule proxy_http_module modules/mod_proxy_http.so

2.) Open your proxy config file C:\xampp\apache\conf\extra\httpd-proxy.conf. Edit it to match the following: 

<IfModule proxy_module>
     <IfModule proxy_http_module>
         ProxyRequests On
         ProxyVia On

      <Proxy *>
          Order deny,allow
          Allow from all
       </Proxy>
 
       ProxyPass /node http://127.0.0.1:3000/
       ProxyPassReverse /node http://127.0.0.1:3000/
   </IfModule>
</IfModule>

3.) Open your vhosts file C:\xampp\apache\conf\extra\httpd-vhosts.conf. Add the following:

 <VirtualHost *:*>
   ProxyPreserveHost On
   ProxyPass "/node" "http://127.0.0.1:3000/"
   ProxyPassReverse "/node" "http://127.0.0.1:3000/"
   ServerName api.impressto.net
</VirtualHost>

4.) Restart Apache.

LINUX:

1.) From a command terminal run:

sudo a2enmod proxy
sudo a2enmod proxy_http
sudo a2enmod proxy_balancer
sudo a2enmod lbmethod_byrequests
sudo service apache2 restart

2.) Open vhosts file /etc/apache2/sites-available/api.impressto.net.conf. Add the following:

<VirtualHost *:443>
  ServerAdmin admin@impressto.net
  ServerName impressto.localhost
  DocumentRoot /var/www/impressto.localhost/public

  ProxyPreserveHost On
  ProxyPass / http://127.0.0.1:8000/
  ProxyPassReverse / http://127.0.0.1:8000/

  ErrorLog ${APACHE_LOG_DIR}/error.log
  CustomLog ${APACHE_LOG_DIR}/access.log combined
  SSLEngine on
  SSLCertificateFile /etc/apache2/ssl/apache.crt
  SSLCertificateKeyFile /etc/apache2/ssl/apache.key

  <Directory /var/www/impressto.localhost/public> 
  Options Indexes FollowSymLinks MultiViews
  AllowOverride All
  Require all granted
  </Directory>

</VirtualHost>


SQL simplified

464839105_2251428365242267_4073966686533682812_n.jpg

Setup a WebSocket Server with Cloudflare

1. What WebRTC is

WebRTC (Web Real-Time Communication) is a set of APIs built into modern browsers that lets two peers (e.g., two users in React apps) connect directly to each other to exchange:

  • Audio/video streams

  • Data (via a “data channel”) — chat messages, files, game state, etc.

The magic: it works peer-to-peer, not through a central server (though servers are still used to help them connect).


2. The Core Pieces

For two apps to talk over WebRTC, you need:

a) Signaling

  • Before peers connect, they must exchange “connection setup” info (called SDP offers/answers and ICE candidates).

  • This is usually done via a server using WebSockets, HTTP POST, or any other channel you choose.

  • Example: your React app might send the connection info through a Node.js/Express WebSocket server.

b) ICE / STUN / TURN

  • WebRTC peers must figure out how to reach each other across the internet (even behind NAT/firewalls).

  • STUN servers: help discover the public IP/port of each peer.

  • TURN servers: relay data if direct P2P fails (fallback).

  • WebRTC handles this automatically if you give it the server addresses.

c) PeerConnection

  • In code: new RTCPeerConnection()

  • This object manages the whole connection, media, and data.

d) Data Channel

  • In code: peerConnection.createDataChannel("chat")

  • Lets you send arbitrary text or binary data → perfect for chat and file transfers.


3. Flow of a Connection

Here’s what happens when User A chats with User B:

  1. A creates an RTCPeerConnection.

  2. A creates a data channel (chat).

  3. A creates an SDP offer (basically: "here’s what I support").

  4. A sends the offer to B via your signaling server.

  5. B receives the offer, creates an RTCPeerConnection, and sets it as remoteDescription.

  6. B creates an SDP answer (like: "ok, here’s what I support").

  7. B sends the answer back to A via the signaling server.

  8. Both A and B exchange ICE candidates until they find a working route.

  9. Connection established 🎉

    • Now data (chat messages, files) or media (audio/video streams) flows directly between browsers.


4. Chat Example (DataChannel)


// Peer A
const pc = new RTCPeerConnection();
const channel = pc.createDataChannel("chat");

channel.onmessage = (event) => {
  console.log("Got message:", event.data);
};

channel.onopen = () => {
  channel.send("Hello from A!");
};

On Peer B:


// Peer B
const pc = new RTCPeerConnection();

pc.ondatachannel = (event) => {
  const channel = event.channel;
  channel.onmessage = (event) => {
    console.log("Got message:", event.data);
  };
};

5. File Transfer Example

WebRTC DataChannels support binary blobs, so you can send files chunk-by-chunk:


// Sender
function sendFile(file, channel) {
  const chunkSize = 16384; // ~16 KB chunks
  let offset = 0;
  const reader = new FileReader();

  reader.onload = (e) => {
    channel.send(e.target.result);
    offset += e.target.result.byteLength;
    if (offset < file.size) {
      readSlice(offset);
    } else {
      channel.send("EOF"); // signal end of file
    }
  };

  function readSlice(o) {
    const slice = file.slice(o, o + chunkSize);
    reader.readAsArrayBuffer(slice);
  }

  readSlice(0);
}

Receiver just listens for binary data and reconstructs the file.


6. Why This Is Cool

  • Low latency → no central server relaying chat or files.

  • Efficient file transfer → can be faster than traditional uploads.

  • Works in all modern browsers (React can just wrap the APIs).


👉 So in your React app, the “chat + file transfer” works because:

  • You use WebRTC DataChannels to send text or binary data.

  • A signaling server helps the two peers find each other and exchange connection info.

  • Once connected, the peers exchange messages/files directly.



1. What signaling needs

Signaling is just about exchanging:

  • SDP offers/answers (connection descriptions)

  • ICE candidates (network routes)

This requires bidirectional, asynchronous communication between peers (via your server).


2. Options compared

✅ WebSockets

  • Persistent, bidirectional connection → server and client can both push messages.

  • Handles real-time signaling well.

  • Scales decently (lots of libraries and servers like Socket.IO).

  • Good match for WebRTC’s event-driven flow (offer → answer → ICE candidates).

⚠️ HTTP (long polling / REST)

  • Works, but is clunky:

    • Client has to poll server repeatedly for new messages.

    • Higher latency (not great if you want snappy connection setup).

    • Server load is heavier (more requests).

  • Might make sense if:

    • You already have an HTTP-only environment.

    • You’re connecting infrequently and can tolerate delay.


3. Hybrid approach

Some apps start with HTTP POST to register a peer (like “here’s my offer”) and then switch to WebSocket or another channel for ongoing exchange. But for your React chat/file transfer app, that’s overkill — just stick with WebSockets.


4. Rule of thumb

  • If real-time matters (chat, file transfer, video, games): → WebSockets.

  • If it’s rare / one-off signaling (like joining a Zoom meeting via invite link): → HTTP can work, but WebSockets still nicer.


👉 So for your use case (chat + file sharing between React apps), WebSockets are definitely the better choice for signaling.



1. Why you can use PHP

  • WebRTC doesn’t care what language your signaling server is in.
    All it needs is a way to exchange JSON messages (offers, answers, ICE candidates).

  • WebSockets are just a protocol — PHP can handle them.


2. Options for PHP WebSocket servers

  • Ratchet (most popular PHP WebSocket library)

    • Built on ReactPHP (event loop).

    • Handles multiple connections easily.

  • Workerman (another solid async PHP framework).

  • Native PHP WebSocket scripts exist, but they’re much more limited.


3. Minimal Ratchet Example

Install via Composer:


composer require cboden/ratchet

Server (server.php):


<?php
use Ratchet\MessageComponentInterface;
use Ratchet\ConnectionInterface;

require __DIR__ . '/vendor/autoload.php';

class SignalingServer implements MessageComponentInterface {
    protected $clients;

    public function __construct() {
        $this->clients = new \SplObjectStorage;
    }

    public function onOpen(ConnectionInterface $conn) {
        $this->clients->attach($conn);
        echo "New connection: {$conn->resourceId}\n";
    }

    public function onMessage(ConnectionInterface $from, $msg) {
        // Broadcast to all connected clients except the sender
        foreach ($this->clients as $client) {
            if ($from !== $client) {
                $client->send($msg);
            }
        }
    }

    public function onClose(ConnectionInterface $conn) {
        $this->clients->detach($conn);
        echo "Connection {$conn->resourceId} closed\n";
    }

    public function onError(ConnectionInterface $conn, \Exception $e) {
        echo "Error: {$e->getMessage()}\n";
        $conn->close();
    }
}

$server = new Ratchet\App('localhost', 8080);
$server->route('/signaling', new SignalingServer, ['*']);
$server->run();

Run it:


php server.php

4. Connecting from React

In your React app:


const ws = new WebSocket("ws://localhost:8080/signaling");

ws.onopen = () => {
  console.log("Connected to signaling server");
  // Send offer/answer/ICE candidates only once the socket is open
  ws.send(JSON.stringify({ type: "offer", sdp: offer }));
};

ws.onmessage = (event) => {
  const message = JSON.parse(event.data);
  console.log("Got signaling message", message);
};

5. Is PHP the best choice?

  • If your app is already running PHP (LAMP stack, WordPress, Laravel, etc.) → Ratchet is fine.

  • If you want scalability or lots of concurrent users → Node.js / Go / Python WebSocket servers perform better.




1. Does Cloudflare support WebSockets?

✅ Yes.

  • Cloudflare has supported WebSocket proxying for years on all plans (Free, Pro, Business, Enterprise).

  • Your WebSocket server just needs to listen on a port Cloudflare allows (usually 80/443 through HTTPS/WSS).


2. How Cloudflare Tunnel affects it

  • Cloudflare Tunnel (cloudflared) creates a secure outbound-only connection from your server → Cloudflare → the internet.

  • If your React app connects to wss://yourdomain.com/signal, Cloudflare terminates TLS and then forwards the WebSocket traffic through the tunnel to your server.

  • From your server’s perspective, it just sees a normal WebSocket connection.

So yes, your PHP WebSocket server will work fine through Cloudflare Tunnel.


3. Things to watch out for

  • Long-lived connections:
    WebSockets rely on staying open. Cloudflare supports long-lived connections (up to 24 hours on free tier, unlimited on Enterprise). If a connection drops, your app should be able to reconnect.

  • Ports:
    If you’re behind Cloudflare, you generally can’t just expose random ports like :8080 directly.
    Instead:

    • Run your WebSocket server on :443 (wss) or :80 (ws).

    • Or let cloudflared map wss://yourdomain.com/signal → your local server’s localhost:8080.

  • Performance:
    WebRTC media/data traffic does not flow through Cloudflare — only the signaling WebSocket does.
    Once peers connect P2P, audio/video/files bypass Cloudflare.
    (Exception: if TURN relay is needed, then that server must be public too, and Cloudflare isn’t involved.)
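Since dropped connections must be handled by the client, here is a minimal sketch of auto-reconnect with capped exponential backoff (the constants and function names are assumptions, not part of any library):

```javascript
// Sketch: reconnecting WebSocket client with capped exponential backoff.
const BASE_BACKOFF_MS = 1000;   // first retry after ~1s
const MAX_BACKOFF_MS = 30000;   // never wait longer than 30s

// Pure helper: delay for the Nth consecutive failed attempt.
function backoffDelay(attempt, base = BASE_BACKOFF_MS, max = MAX_BACKOFF_MS) {
  return Math.min(base * 2 ** attempt, max);
}

function connect(url, attempt = 0) {
  const ws = new WebSocket(url);

  ws.onopen = () => {
    attempt = 0; // reset backoff once a connection succeeds
  };

  ws.onclose = () => {
    // schedule a reconnect; backoff grows with each consecutive failure
    setTimeout(() => connect(url, attempt + 1), backoffDelay(attempt));
  };

  return ws;
}
```

Your app can then call `connect("wss://yourdomain.com/signal")` once and rely on the close handler to re-establish the link.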


4. Example Setup

  1. Run your PHP Ratchet WebSocket server on localhost:8080.

  2. Start a Cloudflare tunnel:


    cloudflared tunnel run my-tunnel
  3. In Cloudflare dashboard, map a subdomain (e.g., signal.example.com) to http://localhost:8080.
    Cloudflare will handle upgrading HTTP → WebSocket automatically.

  4. In your React app:


    const ws = new WebSocket("wss://signal.example.com");

✅ Result:

  • React app → Cloudflare → tunnel → PHP WebSocket server.

  • WebRTC peers connect directly (only signaling goes through Cloudflare).



1. Prerequisites

  • A domain managed by Cloudflare (e.g. example.com)

  • Cloudflare Tunnel (cloudflared) installed on your server

  • Your PHP WebSocket server running locally on port 8080 (Ratchet example earlier)


2. Create a named tunnel


cloudflared tunnel create signaling-tunnel

This generates a tunnel UUID and credentials file, usually at:


~/.cloudflared/<tunnel-uuid>.json

3. Configure the tunnel (YAML)

Create a config file at ~/.cloudflared/config.yml:


tunnel: signaling-tunnel
credentials-file: /home/youruser/.cloudflared/<tunnel-uuid>.json

ingress:
  - hostname: signal.example.com
    service: http://localhost:8080
  - service: http_status:404

Explanation:

  • hostname: the public domain/subdomain you want (signal.example.com)

  • service: the local service (your PHP WebSocket server on localhost:8080)

  • The final catch-all rule (http_status:404) returns a 404 for any request that doesn't match a configured hostname


4. Connect the tunnel to Cloudflare

Run:


cloudflared tunnel route dns signaling-tunnel signal.example.com

This tells Cloudflare DNS to point signal.example.com → your tunnel.


5. Run the tunnel

Foreground (debugging):


cloudflared tunnel run signaling-tunnel

Background (production):


sudo systemctl enable cloudflared
sudo systemctl start cloudflared

6. React client connection

Now in your React app, connect securely:


const ws = new WebSocket("wss://signal.example.com");

ws.onopen = () => {
  console.log("Connected to signaling server via Cloudflare Tunnel");
};

7. Important notes

  • Use wss:// (not plain ws://) — Cloudflare will terminate TLS for you.

  • Cloudflare automatically upgrades HTTP → WebSocket, so you don’t need any special handling in PHP.

  • Your WebSocket server doesn’t need to know about Cloudflare; it just sees normal connections.

  • Only signaling traffic goes through Cloudflare. The actual WebRTC peer-to-peer data (chat, files, media) bypasses Cloudflare once established.


👉 With this setup, you get:

  • Secure WebSocket signaling (wss://signal.example.com)

  • Automatic TLS from Cloudflare

  • No need to expose random ports — everything runs through 443






WebRTC signaling with PHP (Ratchet) + React client

This document contains a minimal, production-aware example showing:

  • A PHP WebSocket signaling server using Ratchet that supports rooms and direct peer-to-peer routing (offers/answers/ICE).

  • A React client snippet (hooks) that shows how to use the signaling server to exchange SDP and ICE and establish a WebRTC DataChannel for chat + file transfer.

  • Notes on Cloudflare Tunnel integration and production recommendations.


1) Server: Ratchet-based signaling server

Install:


composer require cboden/ratchet

server.php


<?php
// server.php
require __DIR__ . '/vendor/autoload.php';

use Ratchet\MessageComponentInterface;
use Ratchet\ConnectionInterface;
use Ratchet\Server\IoServer;
use Ratchet\Http\HttpServer;
use Ratchet\WebSocket\WsServer;

class SignalingServer implements MessageComponentInterface {
    // Map of resourceId => connection
    private $clients;
    // rooms: roomName => [clientId => connection]
    private $rooms = [];

    public function __construct() {
        $this->clients = new \SplObjectStorage;
        echo "Signaling server started\n";
    }

    public function onOpen(ConnectionInterface $conn) {
        $this->clients->attach($conn);
        // store metadata on the connection object
        $conn->clientId = null;
        $conn->room = null;
        echo "New connection: {$conn->resourceId}\n";
    }

    public function onMessage(ConnectionInterface $from, $msg) {
        $data = json_decode($msg, true);
        if (!$data) return;

        switch ($data['type'] ?? '') {
            case 'join':
                // { type: 'join', room: 'room1', clientId: 'alice' }
                $room = $data['room'];
                $clientId = $data['clientId'];
                $from->clientId = $clientId;
                $from->room = $room;
                if (!isset($this->rooms[$room])) $this->rooms[$room] = [];
                $this->rooms[$room][$clientId] = $from;

                // Notify other participants about the new peer
                foreach ($this->rooms[$room] as $id => $conn) {
                    if ($conn !== $from) {
                        $conn->send(json_encode([
                            'type' => 'peer-joined',
                            'clientId' => $clientId,
                        ]));
                    }
                }
                break;

            case 'signal':
                // { type: 'signal', to: 'bob', from: 'alice', payload: {...} }
                // Route only to the intended recipient in the same room
                $to = $data['to'] ?? null;
                $room = $from->room;
                if ($room !== null && isset($this->rooms[$room][$to])) {
                    $this->rooms[$room][$to]->send(json_encode([
                        'type' => 'signal',
                        'from' => $from->clientId,
                        'payload' => $data['payload'] ?? null,
                    ]));
                }
                break;
        }
    }

    public function onClose(ConnectionInterface $conn) {
        $this->clients->detach($conn);
        // Remove from the room and notify remaining peers
        if ($conn->room !== null && isset($this->rooms[$conn->room])) {
            unset($this->rooms[$conn->room][$conn->clientId]);
            foreach ($this->rooms[$conn->room] as $peer) {
                $peer->send(json_encode([
                    'type' => 'peer-left',
                    'clientId' => $conn->clientId,
                ]));
            }
            if (empty($this->rooms[$conn->room])) unset($this->rooms[$conn->room]);
        }
    }

    public function onError(ConnectionInterface $conn, \Exception $e) {
        echo "Error: {$e->getMessage()}\n";
        $conn->close();
    }
}

$server = IoServer::factory(
    new HttpServer(new WsServer(new SignalingServer())),
    8080
);
$server->run();

How it works

  • Clients join a room with a unique clientId.

  • When sending signaling messages (SDP/ICE), clients send type: 'signal' and include to and payload.

  • The server routes signal messages only to the intended recipient inside the same room.


2) React client (hooks) — minimal working flow

This is a stripped-down React hook and helper to show the signaling flow. It focuses on DataChannel (chat + files) but can handle media tracks too.


// useWebRTC.js
import { useEffect, useRef, useState } from 'react';

export default function useWebRTC({ signalingUrl, room, clientId }) {
  // One RTCPeerConnection / DataChannel per remote peer
  const pcsRef = useRef({});
  const wsRef = useRef(null);
  const dataChannelsRef = useRef({});
  const [connectedPeers, setConnectedPeers] = useState([]);

  useEffect(() => {
    const ws = new WebSocket(signalingUrl);
    wsRef.current = ws;

    ws.onopen = () => {
      ws.send(JSON.stringify({ type: 'join', room, clientId }));
    };

    ws.onmessage = async (evt) => {
      const msg = JSON.parse(evt.data);

      if (msg.type === 'peer-joined') {
        // a new peer arrived — you may choose to offer immediately or wait
        setConnectedPeers((p) => [...p, msg.clientId]);
      }

      if (msg.type === 'peer-left') {
        setConnectedPeers((p) => p.filter(id => id !== msg.clientId));
      }

      if (msg.type === 'signal') {
        const { from, payload } = msg;
        await handleSignal(from, payload);
      }
    };

    ws.onclose = () => console.log('signaling closed');

    return () => {
      ws.close();
    };
  }, [signalingUrl, room, clientId]);

  function sendToServer(obj) {
    wsRef.current?.send(JSON.stringify(obj));
  }

  function createPeerConnection(targetClientId, isInitiator = false) {
    const pc = new RTCPeerConnection({
      iceServers: [{ urls: 'stun:stun.l.google.com:19302' }]
    });

    pc.onicecandidate = (e) => {
      if (e.candidate) {
        sendToServer({
          type: 'signal',
          to: targetClientId,
          from: clientId,
          payload: { type: 'ice', candidate: e.candidate }
        });
      }
    };

    if (isInitiator) {
      // The initiator creates the DataChannel; the callee receives it
      dataChannelsRef.current[targetClientId] = pc.createDataChannel('chat');
    } else {
      pc.ondatachannel = (e) => {
        dataChannelsRef.current[targetClientId] = e.channel;
      };
    }

    pcsRef.current[targetClientId] = pc;
    return pc;
  }

  async function handleSignal(from, payload) {
    const pc = pcsRef.current[from] || createPeerConnection(from, false);

    if (payload.type === 'offer') {
      await pc.setRemoteDescription({ type: 'offer', sdp: payload.sdp });
      const answer = await pc.createAnswer();
      await pc.setLocalDescription(answer);
      sendToServer({
        type: 'signal',
        to: from,
        from: clientId,
        payload: { type: 'answer', sdp: answer.sdp }
      });
    } else if (payload.type === 'answer') {
      await pc.setRemoteDescription({ type: 'answer', sdp: payload.sdp });
    } else if (payload.type === 'ice') {
      await pc.addIceCandidate(payload.candidate);
    }
  }

  // Call this to initiate a connection to a peer from connectedPeers
  async function callPeer(targetClientId) {
    const pc = createPeerConnection(targetClientId, true);
    const offer = await pc.createOffer();
    await pc.setLocalDescription(offer);
    sendToServer({
      type: 'signal',
      to: targetClientId,
      from: clientId,
      payload: { type: 'offer', sdp: offer.sdp }
    });
  }

  return { connectedPeers, callPeer, dataChannels: dataChannelsRef };
}


Notes on file transfer

  • Use a chunked approach (e.g. 16KB slices) and send ArrayBuffers over the data channel. Include headers like { fileId, seq, total, meta } in the binary protocol or send JSON control messages.

  • Always respect dataChannel.bufferedAmount to avoid memory spikes (pause sending until it drains).
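The two tips above can be sketched as one helper: fixed-size slicing plus a backpressure pause on `bufferedAmount` (the constants and helper names are assumptions for illustration):

```javascript
// Sketch: send an ArrayBuffer over an RTCDataChannel in 16 KB slices,
// pausing whenever the channel's send buffer grows too large.
const CHUNK_SIZE = 16 * 1024;            // 16 KB slices
const HIGH_WATER_MARK = 1 * 1024 * 1024; // pause above ~1 MB buffered

// Pure helper: split a byte length into [start, end) slice offsets.
function chunkRanges(totalBytes, chunkSize = CHUNK_SIZE) {
  const ranges = [];
  for (let start = 0; start < totalBytes; start += chunkSize) {
    ranges.push([start, Math.min(start + chunkSize, totalBytes)]);
  }
  return ranges;
}

async function sendBuffer(dataChannel, buffer) {
  for (const [start, end] of chunkRanges(buffer.byteLength)) {
    // Backpressure: wait until the channel drains below the mark
    while (dataChannel.bufferedAmount > HIGH_WATER_MARK) {
      await new Promise((resolve) => setTimeout(resolve, 50));
    }
    dataChannel.send(buffer.slice(start, end));
  }
}
```

Control metadata (`{ fileId, seq, total, meta }`) can be sent as JSON messages alongside the binary slices.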


3) Signaling message format

Use small JSON envelopes. Examples used above:

  • Join: { type:'join', room:'room1', clientId:'alice' }

  • Server -> peer-joined: { type:'peer-joined', clientId:'alice' }

  • Signal (client->server->client): { type:'signal', to:'bob', from:'alice', payload: { type:'offer'|'answer'|'ice', sdp?, candidate? } }

This keeps routing simple and deterministic.
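To keep routing deterministic, malformed envelopes should be dropped before they reach the routing logic. A minimal validator sketch (the `validEnvelope` helper is an assumption, not part of the code above):

```javascript
// Sketch: validate the JSON envelopes described above before routing.
function validEnvelope(msg) {
  if (!msg || typeof msg !== 'object') return false;
  switch (msg.type) {
    case 'join':
      // { type:'join', room:'room1', clientId:'alice' }
      return typeof msg.room === 'string' && typeof msg.clientId === 'string';
    case 'signal':
      // { type:'signal', to:'bob', from:'alice', payload:{ type:'offer'|'answer'|'ice' } }
      return typeof msg.to === 'string' &&
             typeof msg.from === 'string' &&
             ['offer', 'answer', 'ice'].includes(msg.payload?.type);
    default:
      return false;
  }
}
```

A server would call this right after `JSON.parse` and silently discard anything that fails.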


4) Cloudflare Tunnel reminder

  • Your React client should connect to wss://signal.example.com (wss required).

  • cloudflared maps signal.example.com -> http://localhost:8080 (or port you choose).

  • Cloudflare will proxy WebSocket frames to your Ratchet server; server code does not need Cloudflare-specific changes.


5) Production recommendations & extras

  • Authentication: require clients to authenticate (JWT) before joining a room. Pass token in the initial join message or as a query string on the wss:// URL (use secure cookies or headers where possible).

  • Scaling: Ratchet on a single server is fine for modest scale. For many concurrent clients you'll need sharding or a pub/sub (Redis) to coordinate messages across multiple instances.

  • TURN server: WebRTC media/data is P2P. If peers are behind symmetric NATs, include a TURN server (coturn) in your ICE config and make it publicly reachable.

  • Keepalive / reconnection: implement ping/pong and auto-reconnect logic for WS.

  • Logging/metrics: log joins/leaves and error rates; track active sockets.
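For the authentication point, one possible shape is to carry a JWT in the join message and verify it server-side before admitting the client to a room. A minimal sketch (the `token` field, `makeJoinMessage`, and `authorizeJoin` are assumptions; real verification would use a JWT library such as `jwt.verify`):

```javascript
// Client side: attach the token to the join envelope.
function makeJoinMessage(room, clientId, token) {
  return JSON.stringify({ type: 'join', room, clientId, token });
}

// Server side: reject joins that lack a token or fail verification.
// `verifyToken` is injected so a real app can plug in jwt.verify(...).
function authorizeJoin(msg, verifyToken) {
  if (msg.type !== 'join' || typeof msg.token !== 'string') return false;
  return verifyToken(msg.token);
}
```

Injecting `verifyToken` keeps the routing code independent of the chosen JWT library.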


6) How to run locally

  1. Start PHP server: php server.php (ensure composer install executed).

  2. Start cloudflared tunnel run signaling-tunnel (if using Cloudflare tunnel).

  3. Serve your React app and point signalingUrl to wss://signal.example.com or ws://localhost:8080 for local dev.




1. Do you need special Nginx config?

It depends how you’re exposing your WebSocket server:

  • If you’re using Cloudflare Tunnel (as we outlined earlier):

    • Nginx doesn’t see the WebSocket traffic at all — cloudflared maps your domain directly to the Ratchet server on localhost:8080.

    • No special Nginx config is needed for signaling. Nginx can still serve your React app or APIs on the same domain or another subdomain.

  • If you want Nginx to proxy WebSockets to Ratchet (without Cloudflare):

    • You need to enable upgrade headers in the proxy config. Example:


      server {
          listen 443 ssl;
          server_name signal.example.com;

          ssl_certificate /etc/letsencrypt/live/signal.example.com/fullchain.pem;
          ssl_certificate_key /etc/letsencrypt/live/signal.example.com/privkey.pem;

          location / {
              proxy_pass http://127.0.0.1:8080;
              proxy_http_version 1.1;
              proxy_set_header Upgrade $http_upgrade;
              proxy_set_header Connection "Upgrade";
              proxy_set_header Host $host;
              proxy_set_header X-Real-IP $remote_addr;
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
              proxy_set_header X-Forwarded-Proto $scheme;
          }
      }

      Key part is:


      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "Upgrade";

      That allows Nginx to upgrade the HTTP request to a WebSocket connection.


2. Does the PHP Ratchet server need to be persistent?

Yes ✅

  • Ratchet is an event loop server (like Node.js).

  • Unlike normal PHP web apps (short-lived scripts under FPM/Apache), this must run as a long-lived process because it maintains open WebSocket connections.

That means:

  • You run it via php server.php, and it stays running.

  • You should manage it like any daemon:

    • With systemd (recommended): create a unit file so it auto-restarts and runs on boot.

    • Or with supervisord, pm2, or Docker.

Example systemd unit (/etc/systemd/system/ratchet.service):


[Unit]
Description=Ratchet PHP WebSocket Signaling Server
After=network.target

[Service]
ExecStart=/usr/bin/php /var/www/signaling/server.php
Restart=always
User=www-data
WorkingDirectory=/var/www/signaling
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=ratchet

[Install]
WantedBy=multi-user.target

Then:


sudo systemctl daemon-reload
sudo systemctl enable ratchet
sudo systemctl start ratchet

3. Summary

  • With Cloudflare Tunnel → no Nginx config needed.

  • With Nginx reverse proxy → add proxy_set_header Upgrade + Connection "Upgrade" for WebSockets.

  • Ratchet must run persistently → treat it like a daemon, not like a normal PHP-FPM request/response cycle.




🔑 Why Node.js fits better than PHP Ratchet for signaling

  1. Event-driven runtime

    • Node was built for long-lived connections (like WebSockets).

    • You don’t need to think about keeping PHP alive or managing a daemon separately.

  2. Ecosystem

    • ws (a very popular WebSocket library) is lightweight and fast.

    • Tons of tutorials, examples, and tools for WebRTC signaling in Node.

  3. Deployment simplicity

    • Easier to containerize with Docker.

    • Works seamlessly with Cloudflare Tunnel (no Nginx needed, just tunnel directly to your Node app).

  4. Code symmetry

    • Your React frontend is already JS/TS → using Node on the backend means you can reuse types and data structures.


🟢 Minimal Node.js WebSocket signaling server


// server.js
import { WebSocketServer, WebSocket } from 'ws';

const wss = new WebSocketServer({ port: 8080 });
const rooms = new Map();

wss.on('connection', (ws) => {
  ws.on('message', (msg) => {
    const data = JSON.parse(msg);
    const { type, room, payload } = data;

    if (type === 'join') {
      if (!rooms.has(room)) rooms.set(room, new Set());
      rooms.get(room).add(ws);
      ws.room = room;
    }

    // Broadcast messages to everyone else in the same room
    if (['offer', 'answer', 'ice', 'chat', 'file'].includes(type)) {
      for (const client of rooms.get(room) || []) {
        if (client !== ws && client.readyState === WebSocket.OPEN) {
          client.send(JSON.stringify({ type, payload }));
        }
      }
    }
  });

  ws.on('close', () => {
    if (ws.room && rooms.has(ws.room)) {
      rooms.get(ws.room).delete(ws);
      if (rooms.get(ws.room).size === 0) {
        rooms.delete(ws.room);
      }
    }
  });
});

console.log("WebSocket signaling server running on ws://localhost:8080");

🔄 React client-side example


const ws = new WebSocket("wss://signal.example.com");

ws.onopen = () => {
  ws.send(JSON.stringify({ type: "join", room: "chat1" }));
};

// Send offer/answer/ice
ws.send(JSON.stringify({ type: "offer", room: "chat1", payload: offer }));

🚀 Deployment with Cloudflare Tunnel

  • Run the server on your host: node server.js (listening on port 8080).

  • Tunnel it:


    cloudflared tunnel --url ws://localhost:8080
  • Cloudflare handles SSL and exposure, no Nginx needed.


✅ So in your case:

  • Keep Nginx + PHP for your normal APIs / website.

  • Run a separate Node.js WebSocket signaling server for WebRTC.

  • Expose it with Cloudflare Tunnel (no special Nginx config).




🔧 Options if port 8080 is already taken

  1. Pick a different local port (simplest)

    • Run your Node signaling server on, say, localhost:9000 or localhost:7000.

    • Example:


      const wss = new WebSocketServer({ port: 9000 });
    • Expose it with Cloudflare Tunnel:


      cloudflared tunnel --url ws://localhost:9000
    • Cloudflare doesn’t care which local port you use — you can map it to any public subdomain.


      ingress:
        - hostname: signal.example.com
          service: ws://localhost:9000

  2. Run multiple services behind one Cloudflare Tunnel

    • Cloudflare Tunnel supports routing rules.

    • Example config.yml:


      tunnel: my-tunnel-id
      credentials-file: /root/.cloudflared/my-tunnel.json

      ingress:
        - hostname: signal.example.com
          service: ws://localhost:9000
        - hostname: api.example.com
          service: http://localhost:8080
        - service: http_status:404
    • That way, you can keep your existing app on 8080 and just run WebRTC signaling on another port.


  3. Use Nginx locally as a multiplexer

    • If you want to keep everything on port 8080 internally, you can put Nginx in front:

      • Nginx listens on 8080.

      • Proxies /signal to localhost:9000.

      • Proxies / or /api to your other app.

    • Then Cloudflare Tunnel points to Nginx only.

    • This way, multiple apps coexist on the same port.

    Example:


    server {
        listen 8080;

        location /signal/ {
            proxy_pass http://127.0.0.1:9000;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "Upgrade";
        }

        location / {
            proxy_pass http://127.0.0.1:3000;  # your other app
        }
    }

If this signaling server is dedicated, Option 1 (different port + Cloudflare Tunnel mapping) is the cleanest and least error-prone. You don’t even need to touch your existing app or Nginx config.




1. Example setup

  • React frontend → app.example.com → served by Nginx on localhost:3000

  • PHP API → api.example.com → running on localhost:8080

  • WebRTC signaling server (Node.js WebSocket) → signal.example.com → running on localhost:9000


2. Cloudflare Tunnel config file (/etc/cloudflared/config.yml)


tunnel: my-tunnel-id
credentials-file: /root/.cloudflared/my-tunnel.json

ingress:
  # React frontend
  - hostname: app.example.com
    service: http://localhost:3000
  # PHP API
  - hostname: api.example.com
    service: http://localhost:8080
  # WebRTC signaling server
  - hostname: signal.example.com
    service: ws://localhost:9000
  # Default (catch-all)
  - service: http_status:404

🔑 Notes:

  • http:// is for normal HTTP traffic.

  • ws:// is for raw WebSockets (Node signaling server).

  • http_status:404 ensures unmatched requests don’t hang.


3. Start the tunnel

If your tunnel is already created:


cloudflared tunnel run my-tunnel-id

If not, create one first:


cloudflared tunnel create my-tunnel
cloudflared tunnel route dns my-tunnel app.example.com
cloudflared tunnel route dns my-tunnel api.example.com
cloudflared tunnel route dns my-tunnel signal.example.com

4. DNS setup in Cloudflare

  • Each hostname (app.example.com, api.example.com, signal.example.com) should point to the tunnel.

  • Cloudflare takes care of SSL certificates for each.


5. Verify

  • Open https://app.example.com → should serve your React app.

  • Open https://api.example.com → should hit your PHP backend.

  • Open wss://signal.example.com from your React client → should connect to your Node WebSocket server.


👉 This way you don’t worry about port clashes at all — every local service can run on its own port, and Cloudflare maps them to different hostnames.


1. Create the systemd unit file

Create a new file:


sudo nano /etc/systemd/system/cloudflared.service

Paste this:


[Unit]
Description=Cloudflare Tunnel
After=network.target

[Service]
# Adjust the path if cloudflared is elsewhere
ExecStart=/usr/bin/cloudflared tunnel run my-tunnel-id
Restart=always
RestartSec=5s
User=cloudflared
WorkingDirectory=/etc/cloudflared
Environment=LOGFILE=/var/log/cloudflared.log

# Optional logging to syslog
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=cloudflared

[Install]
WantedBy=multi-user.target

🔑 Replace my-tunnel-id with the name or UUID of your tunnel (you can see it in cloudflared tunnel list).
You might also need to change /usr/bin/cloudflared if it's installed elsewhere (run which cloudflared to find the actual path).


2. Create a dedicated user (optional, but safer)


sudo useradd -r -s /usr/sbin/nologin cloudflared
sudo chown -R cloudflared:cloudflared /etc/cloudflared

3. Enable and start


sudo systemctl daemon-reload
sudo systemctl enable cloudflared
sudo systemctl start cloudflared

4. Check status & logs


systemctl status cloudflared
journalctl -u cloudflared -f

5. Verify it’s working

  • Visit your exposed hostnames (app.example.com, api.example.com, signal.example.com).

  • Check Cloudflare dashboard → Zero Trust → Tunnels to confirm connections.


✅ With this, your tunnel will:

  • Start on boot.

  • Auto-restart if it crashes.

  • Log to syslog (or the custom log file).



1. Create the service file

Save as:


sudo nano /etc/systemd/system/signaling.service

Example unit:


[Unit]
Description=Node.js WebRTC Signaling Server
After=network.target

[Service]
ExecStart=/usr/bin/node /var/www/signaling/server.js
Restart=always
RestartSec=5s
User=www-data
WorkingDirectory=/var/www/signaling
Environment=NODE_ENV=production

# Optional logging
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=signaling

[Install]
WantedBy=multi-user.target

2. Adjust paths and user

  • /usr/bin/node → run which node to confirm the correct path.

  • /var/www/signaling/server.js → change to your actual signaling server location.

  • User=www-data → pick the same user that owns your app files (you can also create a dedicated user like signaling).


3. Enable and start


sudo systemctl daemon-reload
sudo systemctl enable signaling
sudo systemctl start signaling

4. Check logs


systemctl status signaling
journalctl -u signaling -f

5. Auto-restart on crash

The Restart=always + RestartSec=5s ensures that if your Node app crashes, systemd will bring it back up in 5 seconds.


✅ At this point:

  • cloudflared.service handles your Cloudflare Tunnel.

  • signaling.service keeps your Node.js WebSocket server alive.

  • Both start automatically on boot and are monitored by systemd.



🔧 Target Setup

  • Frontend React app → served by Nginx → app.example.com

  • PHP backend (API) → PHP-FPM via Nginx → api.example.com

  • Node.js WebRTC signaling server → runs separately → signal.example.com

  • Cloudflare Tunnel → maps each subdomain to the right local service


1. PHP-FPM + Nginx config (API + frontend)

Example Nginx site config (e.g. /etc/nginx/sites-available/example.conf):


server {
    listen 3000;
    server_name app.example.com;

    root /var/www/react-app/build;
    index index.html;

    location / {
        try_files $uri /index.html;
    }
}

server {
    listen 8080;
    server_name api.example.com;

    root /var/www/api/public;
    index index.php;

    location / {
        try_files $uri /index.php;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/var/run/php/php8.2-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}

Here:

  • React build served on localhost:3000

  • PHP-FPM API served on localhost:8080


2. Node.js signaling server

Say you have /var/www/signaling/server.js:


import { WebSocketServer } from 'ws';

const wss = new WebSocketServer({ port: 9000 });
console.log("Signaling server running on ws://localhost:9000");

wss.on('connection', (ws) => {
  ws.on('message', (msg) => {
    const data = JSON.parse(msg);
    console.log("Received:", data);
    // Simple echo or broadcast logic here...
  });
});

Runs on localhost:9000.


3. Cloudflare Tunnel config (/etc/cloudflared/config.yml)


tunnel: my-tunnel-id
credentials-file: /root/.cloudflared/my-tunnel.json

ingress:
  - hostname: app.example.com
    service: http://localhost:3000
  - hostname: api.example.com
    service: http://localhost:8080
  - hostname: signal.example.com
    service: ws://localhost:9000
  - service: http_status:404

4. Services managed by systemd

Cloudflared

/etc/systemd/system/cloudflared.service


[Unit]
Description=Cloudflare Tunnel
After=network.target

[Service]
ExecStart=/usr/bin/cloudflared tunnel run my-tunnel-id
Restart=always
RestartSec=5s
User=cloudflared
WorkingDirectory=/etc/cloudflared

[Install]
WantedBy=multi-user.target

Node signaling server

/etc/systemd/system/signaling.service


[Unit]
Description=Node.js WebRTC Signaling Server
After=network.target

[Service]
ExecStart=/usr/bin/node /var/www/signaling/server.js
Restart=always
RestartSec=5s
User=www-data
WorkingDirectory=/var/www/signaling
Environment=NODE_ENV=production

[Install]
WantedBy=multi-user.target

5. Enable & start everything


sudo systemctl daemon-reload
sudo systemctl enable cloudflared signaling nginx php8.2-fpm
sudo systemctl start cloudflared signaling nginx php8.2-fpm

6. Result

  • https://app.example.com → React app via Nginx

  • https://api.example.com → PHP API via Nginx + PHP-FPM

  • wss://signal.example.com → WebRTC signaling via Node.js

  • Cloudflare Tunnel maps them securely, with SSL managed by Cloudflare


✅ With this, you have an all-in-one blueprint: PHP + Nginx + Node + Cloudflare Tunnel living together.
You never need to expose raw ports — just map each subdomain to the right service in your config.yml.



🟢 Enhanced Node.js Signaling + Messaging Server


// server.js
import { WebSocketServer } from "ws";

const wss = new WebSocketServer({ port: 9000 });
const rooms = new Map();

function broadcast(room, sender, message) {
  const clients = rooms.get(room) || new Set();
  for (const client of clients) {
    if (client !== sender && client.readyState === client.OPEN) {
      client.send(JSON.stringify(message));
    }
  }
}

wss.on("connection", (ws) => {
  ws.on("message", (raw) => {
    let data;
    try {
      data = JSON.parse(raw);
    } catch {
      return;
    }
    const { type, room, payload } = data;

    if (type === "join") {
      if (!rooms.has(room)) rooms.set(room, new Set());
      rooms.get(room).add(ws);
      ws.room = room;
      console.log(`Client joined room ${room}`);
      return;
    }

    if (!ws.room) return;

    switch (type) {
      case "offer":
      case "answer":
      case "ice":
        // WebRTC signaling
        broadcast(ws.room, ws, { type, payload });
        break;

      case "chat":
        // Chat messages
        broadcast(ws.room, ws, {
          type: "chat",
          from: ws._id || "anonymous",
          payload,
        });
        break;

      case "file-chunk":
        // File transfer (chunked)
        // payload = { filename, chunk, seq, done }
        broadcast(ws.room, ws, {
          type: "file-chunk",
          from: ws._id || "anonymous",
          payload,
        });
        break;
    }
  });

  ws.on("close", () => {
    if (ws.room && rooms.has(ws.room)) {
      rooms.get(ws.room).delete(ws);
      if (rooms.get(ws.room).size === 0) {
        rooms.delete(ws.room);
      }
    }
  });
});

console.log("Signaling server running on ws://localhost:9000");

🟡 React Client Example

Connecting & joining


const ws = new WebSocket("wss://signal.example.com");

ws.onopen = () => {
  ws.send(JSON.stringify({ type: "join", room: "chatroom1" }));
};

Sending chat


function sendChat(msg) {
  ws.send(JSON.stringify({ type: "chat", room: "chatroom1", payload: msg }));
}

File transfer (chunked)


function sendFile(file) {
  const chunkSize = 16 * 1024; // 16 KB
  let offset = 0;
  const reader = new FileReader();

  reader.onload = (e) => {
    const chunk = e.target.result;
    const done = offset + chunk.byteLength >= file.size;
    ws.send(JSON.stringify({
      type: "file-chunk",
      room: "chatroom1",
      payload: {
        filename: file.name,
        chunk: Array.from(new Uint8Array(chunk)),
        seq: offset,
        done
      }
    }));
    offset += chunk.byteLength;
    if (!done) readNext();
  };

  function readNext() {
    const slice = file.slice(offset, offset + chunkSize);
    reader.readAsArrayBuffer(slice);
  }

  readNext();
}

Receiving chat / files


ws.onmessage = (event) => {
  const data = JSON.parse(event.data);
  if (data.type === "chat") {
    console.log("Chat:", data.from, data.payload);
  }
  if (data.type === "file-chunk") {
    // handle file reconstruction (buffer chunks until done)
    console.log("Received file chunk", data.payload.filename, data.payload.seq);
  }
};

🔑 Key Points

  • WebRTC first → Use signaling (offer, answer, ice) to establish P2P.

  • Fallback to WebSocket → If NAT/firewall blocks P2P, chat and files still work over the signaling server.

  • Chunked files → Keeps memory usage reasonable; reassemble chunks client-side.
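
The "P2P first, WebSocket fallback" decision can be isolated into a small helper that inspects both transports (a sketch; the helper names are assumptions, and `room` matches the examples above):

```javascript
// Sketch: prefer the RTCDataChannel when open, otherwise fall back to
// the signaling WebSocket.
function pickTransport(dataChannel, ws) {
  // RTCDataChannel readyState is a string; WebSocket readyState is numeric (1 = OPEN)
  if (dataChannel && dataChannel.readyState === 'open') return 'p2p';
  if (ws && ws.readyState === 1) return 'websocket';
  return null;
}

function sendChatMessage(dataChannel, ws, room, text) {
  const transport = pickTransport(dataChannel, ws);
  if (transport === 'p2p') {
    // Direct peer-to-peer: no room needed, the channel is already paired
    dataChannel.send(JSON.stringify({ type: 'chat', payload: text }));
  } else if (transport === 'websocket') {
    // Relay through the signaling server, which broadcasts to the room
    ws.send(JSON.stringify({ type: 'chat', room, payload: text }));
  } else {
    throw new Error('No transport available');
  }
  return transport;
}
```

Keeping the choice in one place means chat and file-chunk senders share the same fallback behavior.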


✅ This way your Node server is:

  • Lightweight signaling hub

  • Backup chat and file transfer channel



🟡 File Receiving & Reassembly in React

State for tracking incoming files


import { useRef } from "react";

const incomingFiles = useRef({});
// { filename: { chunks: [], received: 0, done: false } }

Handle incoming file chunks


ws.onmessage = (event) => {
  const data = JSON.parse(event.data);
  if (data.type === "file-chunk") {
    const { filename, chunk, seq, done } = data.payload;

    if (!incomingFiles.current[filename]) {
      incomingFiles.current[filename] = { chunks: [], received: 0, done: false };
    }
    const fileData = incomingFiles.current[filename];
    fileData.chunks.push({ seq, chunk });
    fileData.received += chunk.length;
    if (done) fileData.done = true;

    // If the file is complete, assemble it
    if (fileData.done) {
      assembleFile(filename);
    }
  }
};

Assemble and download file


function assembleFile(filename) {
  const fileData = incomingFiles.current[filename];

  // Sort chunks by seq (in case they arrive out of order)
  fileData.chunks.sort((a, b) => a.seq - b.seq);

  // Convert chunk arrays back into Uint8Array
  const buffers = fileData.chunks.map(c => new Uint8Array(c.chunk));

  // Merge all buffers
  const totalLength = buffers.reduce((sum, b) => sum + b.length, 0);
  const merged = new Uint8Array(totalLength);
  let offset = 0;
  for (const b of buffers) {
    merged.set(b, offset);
    offset += b.length;
  }

  // Create a downloadable blob
  const blob = new Blob([merged]);
  const url = URL.createObjectURL(blob);

  // Trigger download
  const a = document.createElement("a");
  a.href = url;
  a.download = filename;
  a.click();

  // Cleanup
  URL.revokeObjectURL(url);
  delete incomingFiles.current[filename];
}

🔑 How It Works

  1. Each chunk arrives as { filename, chunk, seq, done }.

  2. We collect them in incomingFiles.

  3. Once done = true, we sort chunks and rebuild into a single Uint8Array.

  4. Create a Blob and trigger a browser download.


✅ Now your flow is:

  • Send file → split into chunks, send via WebSocket.

  • Receive file → collect chunks, assemble when complete, prompt download.

  • Works whether WebRTC P2P succeeds or not.