React File Picker

We start with functional requirements:

  • a React control that allows the user to select multiple files for uploading to a server
  • the user should be able to reorder the files (i.e., the order of the files matters)
  • the user should be able to delete (remove) files from the selection

Bonus:

  • the application can set a limit on the number of files the user can select for uploading
  • the application can set a limit on the maximum individual file size; the user cannot select a file larger than this limit
  • the application can predefine the file types the user can select (e.g., only PNG and JPG images)

This seems like a common enough scenario, but surprisingly there is no generic HTML control with the above features. HTML does come with the <input type="file"> element, which we will use as the foundation of the control we are developing. Familiarize yourself with this element before proceeding.

Ready? Let’s get started.

Step 0: Install Node.js and create-react-app (CRA)

Step 1: Run npm init to create a new Node.js project

A note: even though we are creating a Node.js project, the React control we are developing is going to run in a browser. Remember, Node.js is a server-side technology. Your browser (Chrome, say) does not run Node.js; it provides its own JavaScript execution environment, but that is not Node.js. The distinction is important to keep in mind.

Step 2: The React code

If you are new to React, first understand the difference between classic class components (legacy) and the newer functional components (recommended). If you browse code on the web, you are likely to come across both. The two use different programming styles, so it is easy for a newcomer to get confused. Also, functional components are NOT stateless; they are called functional because they don’t use classes. We will create a functional component.

The heart of the component will be a list holding the files the user has selected; the order of items in this list matters. The list is used to render an HTML table which displays metadata about each file (such as filename, size, file type, etc.) together with buttons to reorder items in the list or remove them. A picture is worth a thousand words, so see the example below:

React will react to changes in this list and automatically update the HTML table without us having to do anything. That’s why it’s called React, by the way – because it reacts to changes in variables. Such variables are known as observables, and they remind me of an old library, KnockoutJS, that I used long before React appeared. The pattern is known as the Observer pattern in programming.

Here is the skeleton of our React component:

import React, { useState } from "react";

const FileInput = ({ onChange, maxFileSize, accept, maxFileCount }) => {
  const [list, setList] = useState([]);

  const handleUp = (e, i) => {
    // handle up button and re-order list accordingly
  };

  const handleDown = (e, i) => {
    // handle down button and reorder list accordingly
  };

  const handleDelete = (e, i) => {
     // remove item from the list    
  };

  const validate = (file) => {
    // validate that file does not exceed predefined maxFileSize
  };

  const renderHtmlTable = () => {
    // render the list as an HTML table
  };

  const renderFileInput = () => {
    // render <input type="file"> HTML element which allows user to add items to the list
  };

  return (
    <>
      {renderHtmlTable()}
      {renderFileInput()}
    </>
  );
};

export default FileInput;

Completing the methods is left as an exercise for the reader (though a sketch of a few of them appears below). Explanation of the arguments (props):

  • onChange: an event handler that is called whenever the file list changes (addition, deletion, or a change in the order of items in the list)
  • maxFileSize: a number, in bytes. The user is not allowed to select a file whose size is greater than maxFileSize.
  • accept: same as the accept attribute of <input type="file">; a string that defines the file types the file input should accept.
  • maxFileCount: limits the number of files the user can select.

The useState hook is how functional components access state in React. Familiarize yourself with it if you don’t already know it.
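To make the skeleton a little more concrete, here is a minimal sketch of how a few of the methods could be filled in – just one possible approach, using the list/setList state and the props described above:

const handleDelete = (e, i) => {
  const next = list.filter((_, index) => index !== i); // drop the i-th file
  setList(next);
  if (onChange) onChange(next);
};

const handleUp = (e, i) => {
  if (i === 0) return; // already at the top
  const next = [...list];
  [next[i - 1], next[i]] = [next[i], next[i - 1]]; // swap with the item above
  setList(next);
  if (onChange) onChange(next);
};

const validate = (file) => {
  // reject files larger than maxFileSize (when the prop is set)
  return !maxFileSize || file.size <= maxFileSize;
};

const renderFileInput = () => (
  <input
    type="file"
    multiple
    accept={accept}
    onChange={(e) => {
      const picked = Array.from(e.target.files).filter(validate);
      // append the new files and enforce maxFileCount if it is set
      const next = [...list, ...picked].slice(0, maxFileCount || undefined);
      setList(next);
      if (onChange) onChange(next);
    }}
  />
);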

The component requires the react and react-dom dependencies to function. They should be declared under the peerDependencies section in package.json; otherwise the final application can end up with two copies of React, which causes a runtime error.
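For example, package.json would contain something like this (the exact version range is up to you; hooks require React 16.8 or later):

"peerDependencies": {
  "react": ">=16.8.0",
  "react-dom": ">=16.8.0"
}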

Step 3: Building and packaging the code

JSX has to be compiled to plain JavaScript for browsers to understand it. For this we use Babel. The component in this case is small (just one file), so we don’t need any fancy packaging, but for bigger projects we can use webpack or Rollup. webpack can call Babel for you as shown below (this code goes in webpack.config.js):

const path = require("path");
const pkg = require("./package.json");

module.exports = {
    entry: "./src/FileInput.js",
    output: {
      path: path.resolve(__dirname, 'dist'),
      filename: "main.js",
      library: pkg.name,
      libraryTarget: "umd",
      umdNamedDefine: true
    },
    module: {
      rules: [
        {
          test: /\.(js|jsx)$/,
          exclude: /node_modules/,
          use: {
            loader: "babel-loader"
          }
        }
      ]
    }
};

The babel and webpack dependencies should go under devDependencies in package.json since you only need them in the build process. The built code (i.e., the code that you release) does not need them.

If you are writing in TypeScript then just add

"jsx": "react-jsx"

under the compilerOptions section of your tsconfig.json. The TS compiler will take care of compiling JSX to JS.
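In context, tsconfig.json would contain something like this (keeping whatever other compiler options your project already uses):

{
  "compilerOptions": {
    "jsx": "react-jsx"
  }
}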

Step 4: Testing the code

To test the module before publishing to npm, pack it as a tarball:

$ npm pack

This will pack all the files listed under the files field in package.json and create a tarball (.tgz file). Then copy this tarball to your test project and install it by running:

$ npm i siddjain-react-bootstrap-file-input-1.0.0.tgz
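For reference, the files field that npm pack reads can be as simple as this, assuming the webpack output directory from Step 3:

"files": [
  "dist"
]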

Some online docs suggest using npm link to test a package locally before publishing to npm. In my experience, do NOT use npm link; otherwise you run into the "Invalid hook call" error (another symptom of two copies of React being loaded).


How to scale an image – a comparison of different algorithms

Scaling (or resizing) an image (making it bigger or smaller) is so common these days that we take it for granted but have you ever wondered how it is done? An image is made up of M x N pixels. When we scale it, we create a new image of size W x H pixels. How should the pixels in the new image be filled in?

Downsampling (scaling down)

When we make the image smaller, we are downsampling it. There are at least 3 methods that come to mind. It’s tax season, so I have named them basic, deluxe and premium after the 3 versions of TurboTax:

  • Basic: Simply take every k-th pixel from the original image, where k is the factor by which we are downsampling. E.g., if k = 2, choose every other pixel from the original image. Requires k to be an integer. Mathematically, y[n] = x[kn] where x is the input and y is the output.
  • Deluxe: Average consecutive blocks of k x k pixels in the original image. E.g., if k = 2, average blocks of 2 x 2 pixels to generate the scaled image. Requires k to be an integer. (A sketch of the Basic and Deluxe downsamplers appears right after this list.)
  • Premium: An image can be thought of as a 2D signal, and when we scale (or resize) the image we are essentially resampling that signal. One way to downsample a signal is as follows: take the DFT of the signal and simply crop it to the new size. E.g., given a 512 x 512 image and k = 2, take the DFT of the image and keep only the 256 x 256 portion corresponding to the lower half of the frequencies (0 to π/k in general); discard the rest. Now take the inverse DFT and you should have the downsampled image. To see why this works, refer to a good signal processing textbook. In fact, from a theoretical standpoint, this is the best way to downsample an image.
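For concreteness, here is a minimal C# (System.Drawing) sketch of the Basic and Deluxe downsamplers – not necessarily the code used for the timing results below, just an illustration. It assumes k divides the image dimensions evenly and uses the slow GetPixel/SetPixel calls for brevity (LockBits would be used in practice):

public static Bitmap BasicDownsample(Bitmap src, int k)
{
    var dst = new Bitmap(src.Width / k, src.Height / k);
    for (int y = 0; y < dst.Height; y++)
        for (int x = 0; x < dst.Width; x++)
            dst.SetPixel(x, y, src.GetPixel(x * k, y * k)); // y[n] = x[kn]
    return dst;
}

public static Bitmap DeluxeDownsample(Bitmap src, int k)
{
    var dst = new Bitmap(src.Width / k, src.Height / k);
    for (int y = 0; y < dst.Height; y++)
        for (int x = 0; x < dst.Width; x++)
        {
            int r = 0, g = 0, b = 0;
            for (int dy = 0; dy < k; dy++)      // average the k x k block
                for (int dx = 0; dx < k; dx++)
                {
                    Color p = src.GetPixel(x * k + dx, y * k + dy);
                    r += p.R; g += p.G; b += p.B;
                }
            int n = k * k;
            dst.SetPixel(x, y, Color.FromArgb(r / n, g / n, b / n));
        }
    return dst;
}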

Let’s see how these 3 methods do on a test image:

Original 512×512 image

Results of downsampling by a factor of 2 using the 3 methods (full implementation of the methods is left as an exercise for the reader):

Basic Downsample (11 ms) | Deluxe Downsample (23 ms) | Premium Downsample (72 ms)
Results of downsampling by a factor of 2 using the 3 methods

There is not much difference to be seen by the naked eye. However, consider an image of alternating black and white pixels. What would the Basic downsampler with k = 2 give on this image? Compare with Deluxe. What happens if we shift the input image by 1 pixel – what happens to the output of the Basic downsampler? Is it shift invariant, i.e., does its output also just shift by 1 pixel? (Hint: on the alternating image, Basic produces an all-black or all-white result depending on the shift, while Deluxe produces a uniform gray.)

Upsampling (scaling up an image)

When we make the image bigger, we are upsampling it. Below are the 3 corresponding upsamplers:

  • Basic: Simply repeat every pixel in the original image k x k times, where k is the factor by which we are upsampling. E.g., if k = 2, repeat each pixel twice along each dimension to get an image 2x the size. Requires k to be an integer. This is also known as nearest neighbor interpolation (a sketch appears right after this list).
  • Deluxe: If you had a 1D signal (imagine a sine curve, or any other curve) and were asked to upsample it, you likely wouldn’t repeat values. Instead you would simply do a linear interpolation to fill in the missing values. We can do the same thing with an image (a 2D signal); instead of linear interpolation, we do a bilinear interpolation. Requires k to be an integer.
  • Premium: From signal processing theory, one way to upsample a signal is as follows: take the DFT of the signal and simply pad it with zeros to the new size. E.g., given a 100 x 100 image and k = 2, take the DFT of the image and pad it with zeros to make it 200 x 200. The DFT of the original signal fills the portion from 0 to π/k of the new DFT. Now take the inverse DFT and you should have the upsampled image. Again, to see why this works, refer to a good signal processing textbook. In fact, from a theoretical standpoint, this is the best way to upsample an image.
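And a corresponding sketch of the Basic (nearest neighbor) upsampler, under the same assumptions as the downsampling sketch above:

public static Bitmap BasicUpsample(Bitmap src, int k)
{
    var dst = new Bitmap(src.Width * k, src.Height * k);
    for (int y = 0; y < dst.Height; y++)
        for (int x = 0; x < dst.Width; x++)
            dst.SetPixel(x, y, src.GetPixel(x / k, y / k)); // each source pixel is repeated k x k times
    return dst;
}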

Let’s see how the 3 methods do on a test case of a 100 x 100 image with k = 2.

Original 100×100 image for upsampling
Basic Upsample (75 ms) | Deluxe Upsample (81 ms) | Premium Upsample (11 ms)
Results of upsampling by a factor of 2 using the 3 methods

Scaling by fractional amount

To scale an image by a fractional amount, e.g., k = 1.33, we can first upsample it by 4 using any of the 3 upsamplers above and then downsample it by 3. This gives 9 different results depending upon which combination you choose. It is also possible to combine the concepts of the deluxe upsampler/downsampler into a single function that can scale an image by an arbitrary amount (k need not be an integer or even a rational number). In fact, we may scale the image differently along different dimensions, distorting its aspect ratio. Below is an example using GDI+:

public static Bitmap scale_dot_net(Bitmap bmp, float sx, float sy)
{
    int sourceWidth = bmp.Width;
    int sourceHeight = bmp.Height;
    int destWidth = (int)(sourceWidth * sx);
    int destHeight = (int)(sourceHeight * sy);

    Bitmap bmPhoto = new Bitmap(destWidth, destHeight, PixelFormat.Format24bppRgb);
    bmPhoto.SetResolution(bmp.HorizontalResolution, bmp.VerticalResolution);
    using (Graphics grPhoto = Graphics.FromImage(bmPhoto))
    {
        grPhoto.InterpolationMode = InterpolationMode.HighQualityBicubic;
        // draw the full source rectangle onto the full destination rectangle,
        // letting GDI+ do the resampling
        grPhoto.DrawImage(bmp,
            new Rectangle(0, 0, destWidth, destHeight),
            new Rectangle(0, 0, sourceWidth, sourceHeight),
            GraphicsUnit.Pixel);
    }

    return bmPhoto;
}

In the above, GDI+ is doing all the hard work to scale the image. The exact internals are unknown, although InterpolationMode.HighQualityBicubic gives a clue. This code took 70 ms to downsample and 98 ms to upsample.

My purpose here was to try some methods where I knew exactly what they were doing. I was really interested in seeing the results of the Fourier (Premium) method. For the given test image it’s hard to see the difference between the various methods. I will continue the post with more material.


How to fix tiny display in Mac

TL;DR:

  • Use the Scaled option under System Preferences -> Display if your display is too tiny or too big. Don’t be deterred by the warning: Using a scaled resolution may affect performance.
  • There is no need to buy a 4K monitor if you will be running it at QHD resolution. This is my opinion and I haven’t done a real-world test.
  • For a 27-29″ display I recommend using QHD resolution (~110 PPI). 4K is sure to make the text too tiny to read and will strain your eyes.

In my case it happened because even though I had a 28″ display (Philips 28 E line), Mac OS (Big Sur specifically, on my MacBook Pro) was detecting it as a 61″ display, and so naturally everything was 2x smaller than it ought to be. First I tried to make everything bigger by changing the preferences of individual apps, but this is not a good solution. The solution is to go to System Preferences -> Display and choose Scaled instead of the recommended Default for display setting. The Scaled setting allows you to select a non-native resolution. The downside of selecting a non-native resolution is that the GPU has to do more work, and if not done properly, scaling can lead to a blurry display, but it’s the best workaround for this problem (Mac issues are hard to fix). In the case of my Philips 28″ display, scaling it to 2560×1440 did not lead to any blurring or performance issues.


Mac OS frequently detects the wrong display size for external monitors. For example, see the threads below:

Unfortunately there is no real solution to be found, since it’s an OS problem. When this happens, everything on the screen will appear much smaller than it should (most commonly the OS detects a bigger screen size than it actually is). Now if you are a “principled” person who likes to do everything “by the book”, the gut instinct is not to choose the Scaled display setting, since a warning is displayed: Using a scaled resolution may affect performance. This is the path I chose initially, and I started mucking around with text size settings in various apps to make the text larger.

For VS Code:

For Notes:

For Outlook:

It’s a pain to tweak the settings of every app, and even then there are some things you just can’t change, like the size of the nav bar in Google Chrome or the text size in Webex. So don’t do this.

Instead, just choose the Scaled display setting to make everything uniformly bigger on the screen. The catch is that the scaled setting frequently leads to a blurry (foggy) display, so you have to be careful and pick a scaling that does not. In my case I chose a 2560 x 1440 resolution and it worked well; the native resolution was 4K (3840 x 2160).

Pro tip: Open System Preferences from the Apple menu in Mac OS X and click on “Display”. Under the ‘Display’ tab, hold down the OPTION / ALT key while you click the ‘Scaled’ button alongside Resolution to reveal all available screen resolution options for the display.

I also verified that when I used a monitor whose display size was correctly detected, the native 4K (3840 x 2160) resolution gave the desired display size on the screen.

Below is a Philips 28″ display that is incorrectly detected as a 61″ display, which is the cause of the problem (everything appears 2x smaller than it should):

In the case of an LG monitor, the OS detected the display size correctly:

Never had this problem with Windows btw!


How to fix large display in Mac?

UPDATE: it actually doesn’t get fixed here. With my LG display I couldn’t help but feel that everything was too big now – it’s the reverse of the problem with my Philips 28″. I have Default for display selected, which should run the monitor at its native resolution:

and the native resolution of the monitor is 4K, but lo and behold what do I get when I run this:

 % osascript -e 'tell application "Finder" to get bounds of window of desktop'
0, 0, 1920, 1080

and this confirms the feeling that everything really is too big now. screenresolution gives the same result:

% screenresolution get
2022-03-18 12:45:04.590 screenresolution[4202:24991] starting screenresolution argv=screenresolution get
2022-03-18 12:45:04.604 screenresolution[4202:24991] Display 0: 1920x1080x32@30

What can I say, with a Mac you just can’t win! You have to give up! I went from a tiny display to a large display. I tried setting the display to 4K using the screenresolution program, but it made everything tiny, and setting it to 2560×1440 led to blurring. Good luck, Mac users, getting your display to work!


Still want to keep reading? Re: this post – don’t read too much into it. First, when he recommends a 90-110 PPI resolution (which I agree with), understand what it means: it is the PPI calculated using the physical (actual) number of pixels in the display. For a 4K monitor with a 27″ diagonal and 16:9 aspect ratio this works out to √(3840² + 2160²) / 27 ≈ 163 PPI, and it is constant – it does not change depending on what resolution you are running the display at. We can thus see it’s overkill to buy a 4K monitor with a 27″ display.

Note the part where he says: “I tried all three of the modes, and from a sharpness point of view, I was quite happy with all of them. However, the sharpness advantage of the pixel doubled mode is definitely perceivable.” The pixel-doubled mode is 1920×1080. He is running the 4K display at half the resolution, and there’s actually nothing wrong with that – he admits it gives him the best-looking text. The PPI of the display still remains 163 when we use the actual physical number of pixels in the calculation.

Then he claims: “However, 27 Inch 4K used at scaled 2560×1440 will be a lot sharper than 27 Inch QHD monitor when used with MacOS.” I doubt it. If that’s the case, then why do people complain about a blurry display when using Scaled? He says “For example, in order to display 2560×1440 scaled resolution on a 4K display, MacOs first renders a 5120×2880 canvas, then downscales it to 4K.” I am not sure. My guess is that, more accurately, the applications render a 2560×1440 bitmap which is then upsampled by the graphics card to 3840×2160 when it is sent to the monitor. But then it’s MacOS, so we never know…


Finally I got the LG monitor to work at 2560×1440 without blurring.

From this page: “The ideal size for a monitor mainly depends on its resolution and how far you’re sitting from the screen. Overall, most people find that 1920×1080 shouldn’t be used on anything larger than 25-inch; 1440p is ideal for 27-inch […]”


CloudFlare 522 once again!

TL;DR: In a Dockerized setup this can happen if Docker port forwarding stops working, as it did in our case (all of a sudden). That caused the ELB (elastic load balancer) health probe to fail, the load balancer stopped sending traffic to the machine, and the result was a 522 error (unresponsive application). docker info showed that IPv4 forwarding was disabled (an uncommon scenario), and running sysctl -w net.ipv4.ip_forward=1 (requires root privileges) fixed the problem. The rest of this post contains the steps showing how I diagnosed the problem.


Ran into a CloudFlare 522 error with a website that was working fine before. First I checked that the Docker container was running (use docker ps). The ELB (elastic load balancer) was not able to connect to the backend pool and said:

Some of your load balancing endpoints may be unavailable. Please see the
metrics blade for availability information and troubleshooting steps for
recommended solutions.

The metrics blade showed that the health probe had stopped responding all of a sudden:

The Docker logs showed no more incoming requests after the health probe stopped working. But I was able to make a request to the health probe:

$ curl -k https://XX.YY.ZZZ.AA/health-check
OK

Turns out I was running the command from the VM that was hosting the Docker container. When I ran the same command from another VM it failed:

$ curl -k https://X.Y.Z.A/health-check
curl: (7) Failed to connect to X.Y.Z.A port 443: Connection refused

The other VM was able to ping the VM running the docker container:

$ ping X.Y.Z.A
PING X.Y.Z.A (X.Y.Z.A): 56 data bytes
64 bytes from X.Y.Z.A: icmp_seq=0 ttl=55 time=86.271 ms
64 bytes from X.Y.Z.A: icmp_seq=1 ttl=55 time=86.213 ms
64 bytes from X.Y.Z.A: icmp_seq=2 ttl=55 time=86.876 ms
64 bytes from X.Y.Z.A: icmp_seq=3 ttl=55 time=86.002 ms
64 bytes from X.Y.Z.A: icmp_seq=4 ttl=55 time=85.977 ms
64 bytes from X.Y.Z.A: icmp_seq=5 ttl=55 time=85.636 ms
64 bytes from X.Y.Z.A: icmp_seq=6 ttl=55 time=85.822 ms
^C
--- X.Y.Z.A ping statistics ---
7 packets transmitted, 7 packets received, 0.0% packet loss

This shows the other machine can reach the VM. But can it connect to the port on the VM where the health probe is listening?

$ nc -zv X.Y.Z.A 443
nc: connectx to X.Y.Z.A port 443 (tcp) failed: Operation timed out

What about other ports? Run netstat -tpln to see a list of all ports in use:

$ netstat -tpln
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      -
tcp        0      0 X.Y.Z.A:17472      0.0.0.0:*               LISTEN      -
tcp        0      0 127.0.0.1:17473         0.0.0.0:*               LISTEN      -
tcp        0      0 127.0.0.1:199           0.0.0.0:*               LISTEN      -
tcp        0      0 127.0.0.1:29131         0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      -
tcp6       0      0 :::443                  :::*                    LISTEN      -
tcp6       0      0 :::2377                 :::*                    LISTEN      -
tcp6       0      0 :::7946                 :::*                    LISTEN      -
tcp6       0      0 :::80                   :::*                    LISTEN      -
tcp6       0      0 :::4118                 :::*                    LISTEN      -

Note that 443 and 80 (the ports where the health probes are running) are bound to the wildcard address (0.0.0.0 for IPv4; ::: in the tcp6 rows is the IPv6 equivalent, which on Linux normally accepts IPv4 connections as well), so this is not the problem. Now try connecting to other ports:

$ nc -zv X.Y.Z.A 2377
Connection to X.Y.Z.A port 2377 [tcp/*] succeeded!
$ nc -zv X.Y.Z.A 7946
Connection to X.Y.Z.A port 7946 [tcp/*] succeeded!
$ nc -zv X.Y.Z.A 4118
Connection to X.Y.Z.A port 4118 [tcp/netscript] succeeded!

The fact that other ports are reachable indicates the problem is with Docker. Run ps aux | grep docker-proxy:

$ ps aux | grep docker-proxy
root      56498  0.0  0.0 190896  2640 ?        Sl   12:54   0:00 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 443 -container-ip 172.18.0.4 -container-port 443
root      56511  0.0  0.0 117164  2636 ?        Sl   12:54   0:00 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 80 -container-ip 172.18.0.4 -container-port 80

This shows that port forwarding has been set up correctly (the docker ps command also shows whether port forwarding has been set up). Again, note the binding is to 0.0.0.0, so this is not the problem. docker inspect did not show anything abnormal, and I was just about to post a question on the Docker forums asking for help when I ran docker info – and what do we see?

WARNING: IPv4 forwarding is disabled
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled

I enabled IPv4 forwarding by running (requires root privileges):

$ sysctl -w net.ipv4.ip_forward=1
net.ipv4.ip_forward = 1

and the application was up again! The health probe started working, the load balancer started forwarding traffic, and the CloudFlare 522 error went away! I did not have to enable bridge-nf-call-iptables (maybe because I was running a swarm network?), so I didn’t mess with it.
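One note: sysctl -w only changes the setting until the next reboot. To make it persistent, the standard approach is to add the setting to /etc/sysctl.conf and reload:

$ echo "net.ipv4.ip_forward = 1" | sudo tee -a /etc/sysctl.conf
$ sudo sysctl -p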

The ability to debug is the most important skill for a developer to have.


Bash Quirks: understanding behavior of set -e

There is one Bash quirk I just learnt today that cost me a fair amount of debugging. In summary, when you use the set -e option in Bash, any command or function that returns a non-zero exit code will cause your script to terminate – except that this does not apply when the function is used as the condition of an if statement, or more accurately, when the function forms the test of an if clause. We can see this behavior in action below:

#!/bin/bash
set -e

function func {
  if [ "$1" = "$2" ] ; then
    return 0
  else
    return 1
  fi
}

VAR1=foo
VAR2=bar

if func $VAR1 $VAR2 ; then
   echo "func returned 0"
else
   echo "func returned 1. set -e does not cause script to exit."
fi

SOME_VAR=$(func $VAR1 $VAR1)

echo "SOME_VAR = $SOME_VAR. the function returns 0 so execution continues till here."

SOME_OTHER_VAR=$(func $VAR1 $VAR2)

echo "we never get here and program exits beforehand since function returns 1 and set -e causes script to exit."

Here is the output of running this script:

$ ./test.sh
func returned 1. set -e does not cause script to exit.
SOME_VAR = . the function returns 0 so execution continues till here.

$ echo $?
1

Berkeley 2021

Views from Lawrence Hall of Science

Iskcon Temple – Berkeley

Point Reyes

A tip if you are visiting San Francisco and renting a car: the Bay Area has many toll bridges. You have two options – either buy toll coverage from the rental car agency (which will cost you more – you pay a flat amount per day), or create an account at bayareafastrak.org and register the rental car using its license plate number for the duration of your rental. Then you only get charged when you cross a bridge (pay as you go) and pay no extra fees.


How to check in Bash script if a Docker volume does not exist

Sometimes you may want to check from a Bash script whether a Docker volume exists:

if volumeExists $1; then
    echo "volume exists"
else
    echo "volume does not exist"
fi

To check for the existence of a Docker volume we can use the docker volume ls command with the name filter, but there are a couple of caveats:

  • it does not do an exact match and returns results that match on all or part of a volume’s name
  • The output contains two columns
$ docker volume ls
DRIVER    VOLUME NAME
local     0cbe8bb612d6d4c5727d1eb52b5a7a682088ab744181f7da9c1341e9fae83034

To extract just the last column we can use awk '{print $NF}':

$ docker volume ls | awk '{print $NF}'
NAME
0cbe8bb612d6d4c5727d1eb52b5a7a682088ab744181f7da9c1341e9fae83034

And to do an exact match we can additionally pipe through grep -E. The complete function looks like this:

function volumeExists {
  if [ "$(docker volume ls -f name=$1 | awk '{print $NF}' | grep -E '^'$1'$')" ]; then
    return 0
  else
    return 1
  fi
}

Remember that in Bash a return value of 0 means success, i.e., true inside an if statement. Tip: do not use grep -w, as it will match mysql-test when you want to search for mysql – see below.
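For example, grep -w treats the hyphen as a non-word character, so it still matches mysql inside mysql-test, whereas the anchored grep -E pattern used in volumeExists does not:

$ echo "mysql-test" | grep -w mysql
mysql-test
$ echo "mysql-test" | grep -E '^mysql$'
$ echo $?
1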


Access Denied for User ‘root’@’localhost’ (using password: YES)

In this post I describe a lesser-known reason you may run into the Access Denied for User ‘root’@’localhost’ (using password: YES) error when trying to log in to MySQL. The setup is as follows: you are running MySQL inside a Docker container and using volumes. The startup script may look like this:

docker container create \
	--name benny \
	--network $NETWORK \
	--volume nina:/var/lib/mysql \
	--log-opt max-file=3 \
 	--log-opt max-size=3m \
	--workdir /home \
	--env MYSQL_ROOT_PASSWORD=abracadabra \
	--env MYSQL_DATABASE=wordpress \
	--env MYSQL_USER=wpuser \
	--env MYSQL_PASSWORD=XJJm8f \
	--env TZ=UTC \
  mysql:8.0.24 \
  mysqld --default-authentication-plugin=mysql_native_password

Before running the command above, we first create the Docker volume:

docker volume create nina

Then we run the docker container create command and start the container (docker start benny). Verify that the container starts and that you are able to log in using the passwords above.

$ docker exec -it benny /bin/bash
root@95544c832e2b:/home# mysql -u root -p
Enter password: <enter password from above>

Now do the following:

  • Stop and remove the container (but DO NOT remove the Docker volume)
  • Provision a new container, but give it new passwords (why would you do that? maybe you have a script that autogenerates a new password every time it’s run). E.g.:
docker container create \
	--name benny \
	--network $NETWORK \
	--volume nina:/var/lib/mysql \
	--log-opt max-file=3 \
 	--log-opt max-size=3m \
	--workdir /home \
	--env MYSQL_ROOT_PASSWORD=alibaba \
	--env MYSQL_DATABASE=wordpress \
	--env MYSQL_USER=wpuser \
	--env MYSQL_PASSWORD=t42rSC \
	--env TZ=UTC \
  mysql:8.0.24 \
  mysqld --default-authentication-plugin=mysql_native_password

Now if you try to log in with the new passwords, it won’t work and you get the Access Denied error. The new passwords (environment variables) DO NOT OVERRIDE the original passwords stored in the Docker volume, and MySQL expects you to use the original ones. The lesson here is to store those passwords safely somewhere. If you lose them, you lose access to MySQL.

From the image documentation on Docker Hub:

Do note that none of the variables below will have any effect if you start the container with a data directory that already contains a database: any pre-existing database will always be left untouched on container startup.

https://hub.docker.com/_/mysql
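
If the data in the volume is expendable and you really do want the new passwords to take effect, the way out is to remove the volume so the image initializes a fresh data directory (this destroys the existing database):

docker stop benny
docker rm benny
docker volume rm nina
docker volume create nina
# now re-run the docker container create command with the new passwords and start the container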

ffmpeg – convert mov files to mp4

ffmpeg -i <mov-file> -b <bit-rate> <mp4-file>

Calculate the bit rate as: desired file size in bits / duration of video in seconds.

e.g., for a 100 MB target file and a 20 min video, the bit rate should be 100e6 * 8 / (20 * 60) ≈ 667 kb/s (total, to be split between video and audio).

Use -b:v to set the bitrate for video and -b:a to set the bitrate for audio:

$ ffmpeg -i foo.mov -b:v 800k -b:a 128k foo.mp4



Bash script to upgrade a Docker container

Problem:

We have an application that runs as a single instance in a Docker container and is attached to a swarm network. We want to write a Bash script with which we can deploy a new version of the application. The deployment should be fault-tolerant, meaning that if anything goes wrong we should not lose the previous version of the application.

Solution:

You can try using the docker service command to solve the problem, but here I am going to show how to DIY. The simplified steps are as follows (details and caveats follow):

  1. Build the image that will be used to instantiate new container.
  2. Disconnect old container from swarm network. Then stop it.
  3. Instantiate new container giving it a temporary name. We assume the container gets attached to the swarm network as part of instantiation.
  4. Rename old container to something temporary.
  5. Disconnect new container from network.
  6. Rename new container to old container.
  7. Re-connect new container to swarm network.
  8. Delete old container.

Caveat #1

Steps #2 and #5 are needed if you are using a swarm network, because without them we get an error when renaming the new container to the old container’s name (Step #6):

 Could not add service state for endpoint XXX to cluster on rename: 
 cannot create entry in table endpoint_table with network id tto0055xkicxz0397dln5h06y 
 and key f31072bc004794a4ec5943e4549cd89ad5647047c336fc2c59171c7a4aaef596, already exists

This error does not happen on a bridge network. See the man page, where it says: **The container must be running to disconnect it from the network.** As with everything Docker, this is a bit counter-intuitive, as you normally turn an appliance off before disconnecting it from power.

This also means that we need to check in Step 2 that the old container is running, and start it if it’s not.

Rollback

We need a rollback script in case anything goes wrong. The first thing to figure out is how to catch exceptions in Bash. Like everything in Bash, there is no good way; the trap builtin is the closest we have.

function rollback {
    echo "FAILED to provision new container. Rolling back deployment"
    if containerExists $NEW_CONTAINER_NAME; then
        docker logs $NEW_CONTAINER_NAME
        if containerIsRunning $NEW_CONTAINER_NAME; then
            docker stop $NEW_CONTAINER_NAME
        fi
        docker rm $NEW_CONTAINER_NAME
    fi
    if containerExists $TMP_CONTAINER_NAME && ! containerIsRunning $TMP_CONTAINER_NAME; then
        # this means we are able to rename the original container but the renaming of the newly provisioned container failed.
        echo "rename $TMP_CONTAINER_NAME to $OLD_CONTAINER_NAME"
        docker rename $TMP_CONTAINER_NAME $OLD_CONTAINER_NAME
    fi
    # resume original container
    if containerExists $OLD_CONTAINER_NAME && ! containerIsRunning $NEW_CONTAINER_NAME; then
        echo "starting original container"
        docker start $OLD_CONTAINER_NAME
        echo "connecting it to the network"
        docker network connect $NETWORK $OLD_CONTAINER_NAME    
    fi
    # return with non-zero exit code to indicate deployment failure
    exit 1
}
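
The rollback function above and the main script below rely on two helpers, containerExists and containerIsRunning, as well as the OLD_CONTAINER_NAME, NEW_CONTAINER_NAME, TMP_CONTAINER_NAME and NETWORK variables, none of which are shown here. A minimal sketch of what they could look like (the values are placeholders – use your own names):

OLD_CONTAINER_NAME=myapp
NEW_CONTAINER_NAME=myapp-new
TMP_CONTAINER_NAME=myapp-old
NETWORK=my-swarm-network

function containerExists {
    # exact name match against all containers (running or stopped)
    docker ps -a --format '{{.Names}}' | grep -qE "^$1$"
}

function containerIsRunning {
    # exact name match against running containers only
    docker ps --format '{{.Names}}' | grep -qE "^$1$"
}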

Main Script

if containerExists $OLD_CONTAINER_NAME; then
    # trap will catch unhandled exceptions
    # https://stackoverflow.com/a/35800451/147530
    trap 'rollback' ERR    
    if ! containerIsRunning $OLD_CONTAINER_NAME; then
        # this code path can be entered when you are developing locally and you stopped container
        echo "$OLD_CONTAINER_NAME exists but is not running. starting $OLD_CONTAINER_NAME"
        docker start $OLD_CONTAINER_NAME
    fi
    # When using a Docker Swarm we have to explicitly disconnect the container
    # from the network. 
    # Note that if after disconnecting the network is left with no attached containers, it will go away and disappear momentarily and you will get a
    # failed to get network during CreateEndpoint error when you try to attach any container to the network
    # see https://github.com/moby/moby/pull/41011
    # can't believe Docker has so many bugs in it
    echo "disconnecting $OLD_CONTAINER_NAME from $NETWORK"
    docker network disconnect $NETWORK $OLD_CONTAINER_NAME
    # stop the container. we have to stop the old container and cannot avoid a short
    # downtime. If we try to provision new container while old one is still running, we get this error presumably when it tries to publish its port:
    # Error response from daemon: driver failed programming external connectivity on endpoint Bind for 0.0.0.0:443 failed: port is already allocated
    echo "stopping $OLD_CONTAINER_NAME"
    docker stop $OLD_CONTAINER_NAME
    
    echo "starting new container"
    CONTAINER_NAME=$NEW_CONTAINER_NAME ./deploy-container.sh
    # if temp container has been successfully provisioned
    if containerIsRunning $NEW_CONTAINER_NAME; then
        echo "swapping old container with new"
        echo "renaming $OLD_CONTAINER_NAME to $TMP_CONTAINER_NAME"
        docker rename $OLD_CONTAINER_NAME $TMP_CONTAINER_NAME
        # see https://github.com/moby/moby/issues/42351
        # for why we are disconnecting. Without it the rename will fail on an overlay network.
        # Its a bug in Docker and the disconnect is a workaround.
        echo "disconnecting $NEW_CONTAINER_NAME from $NETWORK"
        docker network disconnect $NETWORK $NEW_CONTAINER_NAME
        echo "renaming $NEW_CONTAINER_NAME to $OLD_CONTAINER_NAME"
        docker rename $NEW_CONTAINER_NAME $OLD_CONTAINER_NAME
        echo "reconnecting $NEW_CONTAINER_NAME to $NETWORK"
        docker network connect $NETWORK $OLD_CONTAINER_NAME
        echo "removing $TMP_CONTAINER_NAME"
        docker rm $TMP_CONTAINER_NAME
    else
        rollback
    fi    
else
    # it looks like we are deploying for the first time
    CONTAINER_NAME=$OLD_CONTAINER_NAME ./deploy-container.sh
fi

