
Creating tables in Database docker containers is slow (M1 Preview) #5236

Closed
jamielsharief opened this issue Jan 11, 2021 · 53 comments

@jamielsharief

jamielsharief commented Jan 11, 2021

I have noticed that when running a database server (MySQL or MariaDB) in a Docker container on the M1 Technical Preview, it is extremely slow.

Watching the table creation visually, you can see how slow it is.

@jamielsharief jamielsharief changed the title Creating tables in Database docker containers is slow Creating tables in Database docker containers is slow (M1 Preview) Jan 11, 2021
@stephen-turner stephen-turner added the area/m1 M1 preview builds label Jan 11, 2021
@stephen-turner
Contributor

@jamielsharief Have you done the same test on an Intel Mac? I would be interested to know how much slower it is.

@jamielsharief
Author

Yes I have; it is significantly slower than a 2014 MacBook with 8 GB RAM.

@jamielsharief
Author

jamielsharief commented Jan 11, 2021

To give you an idea: running PHPUnit for a package took maybe 3-4 minutes on Intel. On this M1 I had to stop it after 15 minutes. The unit tests drop and recreate tables for each test.

Running just this query through TablePlus:

CREATE TABLE `bookmarks` (
	`id` INT(11) NOT NULL AUTO_INCREMENT,
	`user_id` INT(11) NOT NULL,
	`title` VARCHAR(50) NOT NULL,
	`description` TEXT,
	`url` TEXT,
	`category` VARCHAR(80),
	`created` DATETIME NOT NULL,
	`modified` DATETIME NOT NULL,
	PRIMARY KEY (id),
	INDEX `user_id` (user_id)
) ENGINE = InnoDB

MySQL - M1: 655 ms
MySQL - Intel MacBook 2014, 8 GB RAM: 38 ms
MariaDB - M1: 330 ms

@stephen-turner
Contributor

Interesting, thank you.

@gyf304

gyf304 commented Jan 11, 2021

Maybe related: evansm7/vftool#14
Virtualization.framework drops IRQs for CPUs other than CPU0, causing slow IO.
Adding irqfixup to kernel cmdline mitigates this.
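A quick way to check whether a VM is affected (a sketch, assuming a Linux guest, e.g. run from any shell inside a container on the VM): /proc/interrupts shows per-CPU interrupt counts, so if the bug is present, only the CPU0 column grows for the virtio devices.

```shell
# Print the first rows of the per-CPU interrupt counters.
# On an affected Virtualization.framework VM, the virtio rows show
# activity almost exclusively in the CPU0 column.
head -n 20 /proc/interrupts
```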

@jamielsharief
Author

Whilst other things can be faster, I have also noticed that the Dockerfile steps RUN chown -R www-data:www-data /var/www and RUN chmod -R 0775 /var/www are painfully slow: they take approximately 1 minute 30 seconds for a directory with 3684 files (find -type f | wc -l).
I don't remember these 1.5-minute pauses on my old Intel machine, but I might be wrong.
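For anyone wanting to reproduce this outside a full image build, a minimal sketch (the directory and file count are placeholders) that times the same recursive chown/chmod pattern:

```shell
# Create a throwaway tree of small files, then time the recursive
# ownership and permission changes a Dockerfile RUN step would perform.
dir=$(mktemp -d)
i=1
while [ $i -le 1000 ]; do : > "$dir/file$i"; i=$((i + 1)); done
echo "files: $(find "$dir" -type f | wc -l)"

time chown -R "$(id -u):$(id -g)" "$dir"   # stand-in for chown -R www-data
time chmod -R 0775 "$dir"

rm -rf "$dir"
```

On an unaffected host, both commands finish in well under a second for a few thousand files.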

@navidnadali

Hello, just to confirm: we have replicated the slow table creation/dropping on the M1 Docker preview using the mariadb Docker image. It seems to be at least 4x to 5x slower. I have access to both a 16" MBP and a 13" M1 MBP and can run any tests/diagnosis needed. It is most likely I/O-bound.

@gyf304

gyf304 commented Jan 13, 2021

Until the Docker folks add irqfixup to the kernel args, setting the CPU count to 1 will work around the IO/IRQ issues (you will lose multi-core CPU performance, of course).

@jamielsharief
Author

jamielsharief commented Jan 13, 2021

@gyf304 thanks, but I am not interested in a hack, I am reporting a possible problem with the preview version.

@gyf304

gyf304 commented Jan 13, 2021

@gyf304 thanks, but I am not interested in a hack, I am reporting a possible problem with the preview version.

The only people who can actually fix this are Docker or Apple. I'm not affiliated with Docker or Apple, so I cannot fix it on their behalf.

I'm offering a workaround which might not interest you in particular, but may help other people.

@jamielsharief
Author

@gyf304 You are right, sorry. Changing the kernel configuration in a Docker setup sounds like it's going to be more than a one-liner. Do you have a link to a resource?

@jamielsharief
Author

Sorry @gyf304, it was late last night here, and I thought you were talking about modifying the kernel on my system to get it to work. If it is something that can be adjusted in Docker or the Docker image, then it would be very helpful to many developers.

@gyf304

gyf304 commented Jan 14, 2021

The kernel argument is for the Linux VM started by Docker Desktop. You can probably modify the cmdline in the app package (right-click, Show Package Contents) at Contents/Resources/linuxkit/cmdline. The only problem is that the package itself is signed, so modifying it will cause Docker Desktop to refuse to start.

@mnylen

mnylen commented Jan 21, 2021

I also noticed that running tests (with database tables cleaned using DELETE statements between each test) against the PostgreSQL image (postgres:11) is very slow with the Docker Preview on an M1 MacBook Pro, compared to running them against a locally installed PostgreSQL or even against the stable version on the Intel platform.

Perhaps this is a similar issue? Some timings:

M1 MacBook Pro running the test suite against local PostgreSQL: 47 seconds
M1 MacBook Pro running the test suite against PostgreSQL running in Docker Preview: 4 minutes 36 seconds
Intel MacBook Pro w/ i9 processor running the tests against PostgreSQL running in Docker stable: 1 minute 30 seconds

Sorry that I can't provide any more details about the test setup.

@mnylen

mnylen commented Feb 12, 2021

Updated to 3.1.0 (60984) and still experiencing the slowness. Unfortunately lowering the CPU count in the settings to 1 did not help for me.

@jamielsharief
Author

Just a reminder of my comment above: whilst this is most noticeable with database systems, I also experienced the same with file-based commands in the Dockerfile, e.g. chmod.

#5236 (comment)

@jamielsharief
Author

jamielsharief commented Feb 15, 2021

I downloaded the latest Docker M1 Preview today, which says it fixes the filesystem slowness. I am not sure, but databases such as MySQL and MariaDB still struggle at creating tables.

(screen recording: Kapture 2021-02-15 at 11:51:37)

@stephen-turner
Contributor

mysql is an Intel container, isn't it? That's always going to be slow on an M1 machine because the architecture has to be emulated by qemu.

@gyf304

gyf304 commented Feb 16, 2021

It seems like the latest version added irqaffinity=0 cmdline to the kernel. The IO performance issue should be fixed now.
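This can be verified from inside any container, since /proc/cmdline reflects the VM kernel's own boot arguments (a sketch; the alpine image below is an arbitrary choice):

```shell
# Show the boot arguments of the kernel the container runs under; on a
# fixed Docker Desktop build this should include irqaffinity=0, e.g.:
#   docker run --rm alpine cat /proc/cmdline
cat /proc/cmdline
```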

@jamielsharief
Author

jamielsharief commented Feb 16, 2021

On 11th Jan I published a quick benchmark test (above).

Today I used TablePlus on the most recent version to execute this statement, which shows the execution time.

CREATE TABLE `bookmarks` (
	`id` INT(11) NOT NULL AUTO_INCREMENT,
	`user_id` INT(11) NOT NULL,
	`title` VARCHAR(50) NOT NULL,
	`description` TEXT,
	`url` TEXT,
	`category` VARCHAR(80),
	`created` DATETIME NOT NULL,
	`modified` DATETIME NOT NULL,
	PRIMARY KEY (id),
	INDEX `user_id` (user_id)
) ENGINE = InnoDB
  • MariaDB Docker image (official): 222 ms
  • MariaDB installed inside an Ubuntu VM (Parallels): 10 ms

As regards the ins and outs of MySQL emulation, I cannot answer. I have built the image locally on my M1 and used buildx on a remote server; in both cases MySQL is considerably slower than the official MariaDB images.

@jamielsharief
Author

I just ran a build and the chmod and chown execution still seems very slow (1853 files), but I don't know how that compares to before or whether this is expected.


@danielgindi

Happens with mongodb too. Consistently taking 600-700ms instead of a few ms.

@jamielsharief
Author

@danielgindi did you try on the Docker release from last Thursday (preview does not update, you need to reinstall) and are you using the official image of MongoDB?

@navidnadali

Hello, I have access to a 16" Intel MacBook Pro and a 13" M1, currently with the first preview release of Docker. Can we collectively come up with a few different tests that I can run on both? I'll then upgrade the M1 to the latest preview released last Thursday and provide the results for all three variations.

I was thinking of the following:

  1. Create 1000 files
  2. Chown 1000 files
  3. Delete 1000 files
  4. MySQL Create 100 tables
  5. Insert 1000 Rows
  6. Drop 100 tables

Do you guys/girls have any other suggestions that would help? I'll do this over the weekend and report back.
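The filesystem half of that list (steps 1-3) can be scripted directly; a sketch, with the file count and directory as placeholders (the SQL steps would be driven through the mysql client the same way):

```shell
#!/bin/bash
set -e
dir=$(mktemp -d)

# Helper: create 1000 empty files in the scratch directory.
create_files() { i=1; while [ $i -le 1000 ]; do : > "$dir/f$i"; i=$((i + 1)); done; }

time create_files                           # 1. Create 1000 files
time chown -R "$(id -u):$(id -g)" "$dir"    # 2. Chown 1000 files (own uid, so no root needed)
time rm -rf "$dir"                          # 3. Delete 1000 files
```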

@jamielsharief
Author

@navidnadali I think use MariaDB, since it has an official image and creating tables is still very slow. What about network traffic speed?

@navidnadali

I think we can expect very similar results between MariaDB and MySQL given the nature of the issue, but I'm happy to test on that image as well. Ideally I'd like to limit it to the purpose of this issue, which is slow I/O on the new M1 Docker preview (I've not noticed any negative network performance so far).

@jamielsharief
Author

@navidnadali From my original test for creating a table (as described above), MySQL took twice as long. I only mentioned MariaDB (I don't use it) because it has been mentioned twice that there is no official MySQL image for this architecture.

MySQL - M1: 655 ms
MySQL - Intel MacBook 2014, 8 GB RAM: 38 ms
MariaDB - M1: 330 ms

Here is my Docker image for MySQL: https://hub.docker.com/repository/docker/jamielsharief/mysql (source code also available on GitHub). It's my first venture into compiling for different architectures; I think I got it right.

@B4nan

B4nan commented Feb 18, 2021

I can confirm the slowness is observable in all of mysql/mariadb/postgres via Docker. From what I saw, the problematic queries are creating tables and delete/truncate queries (especially truncate cascade in Postgres); the more indexes/FKs/constraints on the table, the slower it gets (even on empty tables). From docker stats I see almost no load on the container, but things take seconds instead of milliseconds.

Ended up using homebrew to install the dependencies (not great as it is pretty much impossible to run mysql and mariadb via brew at the same time).

(I am on the latest docker preview, BigSur 11.2.1 (20D74), MBP M1 16gb)

edit: Also tried to play with number of CPUs and resources, same result. Even if I don't use explicit volumes and keep the defaults, or even when using delegated volume - same result.

@danielgindi

danielgindi commented Feb 18, 2021

@danielgindi did you try on the Docker release from last Thursday (preview does not update, you need to reinstall) and are you using the official image of MongoDB?

@jamielsharief Yes and yes

@jnugh

jnugh commented Feb 20, 2021

I did some measurements as well and tried to dig a little deeper to see what is actually going on. I started by running pg_test_fsync (a Postgres tool to test write performance with fsync).

Locally installed version:

/opt/homebrew/Cellar/postgresql/13.2/bin/pg_test_fsync 
5 seconds per test
Direct I/O is not supported on this platform.

Compare file sync methods using one 8kB write:
(in wal_sync_method preference order, except fdatasync is Linux's default)
        open_datasync                     25096.743 ops/sec      40 usecs/op
        fdatasync                         25315.125 ops/sec      40 usecs/op
        fsync                             25325.369 ops/sec      39 usecs/op
        fsync_writethrough                   55.117 ops/sec   18143 usecs/op
        open_sync                         48275.359 ops/sec      21 usecs/op

Compare file sync methods using two 8kB writes:
(in wal_sync_method preference order, except fdatasync is Linux's default)
        open_datasync                     23969.437 ops/sec      42 usecs/op
        fdatasync                         44658.595 ops/sec      22 usecs/op
        fsync                             41987.408 ops/sec      24 usecs/op
        fsync_writethrough                   54.774 ops/sec   18257 usecs/op
        open_sync                         24331.770 ops/sec      41 usecs/op

Compare open_sync with different write sizes:
(This is designed to compare the cost of writing 16kB in different write
open_sync sizes.)
         1 * 16kB open_sync write         49191.864 ops/sec      20 usecs/op
         2 *  8kB open_sync writes        23965.957 ops/sec      42 usecs/op
         4 *  4kB open_sync writes        12443.790 ops/sec      80 usecs/op
         8 *  2kB open_sync writes         6311.889 ops/sec     158 usecs/op
        16 *  1kB open_sync writes         3162.668 ops/sec     316 usecs/op

Test if fsync on non-write file descriptor is honored:
(If the times are similar, fsync() can sync data written on a different
descriptor.)
        write, fsync, close               17635.059 ops/sec      57 usecs/op
        write, close, fsync               30637.555 ops/sec      33 usecs/op

Non-sync'ed 8kB writes:
        write                             49004.614 ops/sec      20 usecs/op

Through Docker:

docker run postgres /usr/lib/postgresql/13/bin/pg_test_fsync
5 seconds per test
O_DIRECT supported on this platform for open_datasync and open_sync.

Compare file sync methods using one 8kB write:
(in wal_sync_method preference order, except fdatasync is Linux's default)
        open_datasync                        53.242 ops/sec   18782 usecs/op
        fdatasync                            52.474 ops/sec   19057 usecs/op
        fsync                                25.774 ops/sec   38798 usecs/op
        fsync_writethrough                              n/a
        open_sync                            25.833 ops/sec   38710 usecs/op

Compare file sync methods using two 8kB writes:
(in wal_sync_method preference order, except fdatasync is Linux's default)
        open_datasync                        26.337 ops/sec   37969 usecs/op
        fdatasync                            53.094 ops/sec   18834 usecs/op
        fsync                                25.229 ops/sec   39637 usecs/op
        fsync_writethrough                              n/a
        open_sync                            12.856 ops/sec   77782 usecs/op

Compare open_sync with different write sizes:
(This is designed to compare the cost of writing 16kB in different write
open_sync sizes.)
         1 * 16kB open_sync write            25.734 ops/sec   38859 usecs/op
         2 *  8kB open_sync writes           12.891 ops/sec   77575 usecs/op
         4 *  4kB open_sync writes            6.472 ops/sec  154510 usecs/op
         8 *  2kB open_sync writes            3.234 ops/sec  309240 usecs/op
        16 *  1kB open_sync writes            1.596 ops/sec  626515 usecs/op

Test if fsync on non-write file descriptor is honored:
(If the times are similar, fsync() can sync data written on a different
descriptor.)
        write, fsync, close                  25.349 ops/sec   39449 usecs/op
        write, close, fsync                  25.647 ops/sec   38991 usecs/op

Non-sync'ed 8kB writes:
        write                            444377.164 ops/sec       2 usecs/op

I tried to identify interesting metrics, mostly by searching for related issues and experimenting with different trace events. I have no idea if the things that looked interesting to me are actually relevant or if my interpretation is correct by any means 😄

I created a simple program that just opens a file with O_SYNC and writes 1GB to disk in 10kB chunks. Differences are insane (I never finished execution inside a VM).
While running I watched /proc/pid/wchan which contained jbd2_log_wait_commit most of the time. Most of the time is spent waiting for confirmation on journal writes. I feel like the average transaction commit time is quite high (33ms) as well as logging transaction time (38ms).
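The sync-write pattern described above can be approximated without a custom program using GNU dd, whose oflag=sync opens the output with O_SYNC (a sketch; sizes scaled far down from the original 1 GB so it completes even on an affected VM, and the /tmp paths are placeholders):

```shell
# Synchronous writes: each 10 kB block must reach the (virtual) disk
# before the next one is issued.
time dd if=/dev/zero of=/tmp/sync_test bs=10K count=500 oflag=sync 2>/dev/null

# Buffered writes of the same data, for comparison.
time dd if=/dev/zero of=/tmp/buffered_test bs=10K count=500 2>/dev/null

rm -f /tmp/sync_test /tmp/buffered_test
```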

cat /proc/fs/jbd2/vda1-8/info

32119 transactions (32068 requested), each up to 16384 blocks
average: 
  0ms waiting for transaction
  0ms request delay
  19ms running transaction
  0ms transaction was being locked
  0ms flushing data (in ordered mode)
  38ms logging transaction
  33392us average transaction commit time
  138 handles per transaction
  3 blocks per transaction
  5 logged blocks per transaction

Digging deeper (events/jbd2/jbd2_run_stats/enable, events/jbd2/jbd2_checkpoint_stats/enable) every entry more or less looked like these (logging always 38 which matches the value from above):

jbd2/vda1-8-520   [002] ...2  3185.566488: jbd2_checkpoint_stats: dev 254,1 tid 111211 chp_time 0 forced_to_close 0 written 0 dropped 3
jbd2/vda1-8-520   [002] ...1  3185.566578: jbd2_run_stats: dev 254,1 tid 111212 wait 0 request_delay 0 running 1 locked 0 flushing 0 logging 38 handle_count 5 blocks 3 blocks_logged 5

So, moving one layer deeper: hctx0/busy shows .op=FLUSH, .cmd_flags=PREFLUSH, .rq_flags=FLUSH_SEQ|STATS, .state=in_flight most of the time. So it takes a long time to dispatch the flush to the "hardware" (the virtio driver).

Running blktrace and btt:

==================== All Devices ====================

            ALL           MIN           AVG           MAX           N
--------------- ------------- ------------- ------------- -----------

Q2Q               0.000000375   0.006123825   0.118600662         966
Q2G               0.000000208   0.002571336   0.119936520        1135
G2I               0.000000750   0.000005746   0.000081915         959
I2D               0.000001291   0.000016095   0.000055416         713
D2C               0.000202789   0.000509097   0.000925404         840
Q2C               0.000248497   0.004118357   0.068548951         840

==================== Device Overhead ====================

       DEV |       Q2G       G2I       Q2M       I2D       D2C
---------- | --------- --------- --------- --------- ---------
 (252,  0) |  84.3629%   0.1593%   0.0000%   0.3317%  12.3617%
---------- | --------- --------- --------- --------- ---------
   Overall |  84.3629%   0.1593%   0.0000%   0.3317%  12.3617%

As I don't run any parallel writes this is probably a good measure of the latency between the block driver and the hypervisor.

If I try something that does not fsync immediately:

time dd if=/dev/zero of=/root/testfile bs=1G count=1
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 1.23082 s, 872 MB/s

real    0m1.258s
user    0m0.001s
sys     0m1.154s

This looks quite good. Especially compared to results from before mitigating the interrupt drop issue (ubuntu vm with irqbalance and no special cmdline):

time dd if=/dev/zero of=/root/testfile bs=1G count=1
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.05613 s, 265 MB/s

real    0m4.085s
user    0m0.001s
sys     0m1.588s

So significant improvements already 😄 . Looking at hctx0/busy while dd is running shows lots of WRITE ops. Seems like it uses the available bandwidth quite well.

I verified that this matches a VM running directly on Virtualization.framework using vftool, so this is very likely not a Docker issue. I'm not sure how to analyze this further: I have no idea how the queue is structured and processed by the host, and there is not much documentation from Apple on this topic as far as I can tell. If anyone has suggestions on what to check/try, let me know.

@jamielsharief
Author

@jnugh I have installed Ubuntu Server through the Parallels Desktop M1 Preview, set up LXD, and installed MySQL, MariaDB etc. in containers, and the performance is great. So I think it is a Docker issue.

@jnugh

jnugh commented Feb 20, 2021

@jamielsharief I just installed Parallels. It uses a custom hypervisor.
My measurements were made on a guest running in Apple's Virtualization.framework (https://developer.apple.com/documentation/virtualization) which is being used by docker, too.

@jamielsharief
Author

@jnugh Sorry, I must have missed the last line of your post. If I understand correctly, you built a VM using the Apple Virtualization framework and experienced the same thing. If that is the case, that is very interesting indeed.

@jnugh

jnugh commented Feb 21, 2021

Exactly. After testing in a docker VM shell, I used evansm7/vftool which is a cli wrapper for the virtualization API.

@Tenzer

Tenzer commented Feb 23, 2021

When the most recent preview version came out I tried to do some benchmarking on it specifically related to seeing slow IO performance with MariaDB. I posted it on here, in case it would be interesting to reproduce: docker/roadmap#142 (comment)

@jnugh

jnugh commented Feb 23, 2021

Awesome thanks for the reference 😄

"It's probably something that needs to be fixed by Apple" (docker/roadmap#142 (comment)) was also my very uneducated conclusion (I have no idea how hypervisors usually handle disk IO) after doing some additional debugging with the Xcode profiler on the new hypervisor. It looked very inefficient for sync writes: even when the guest was only writing to disk, it did more reads than writes on the image file. Maybe performance could be improved by disabling the journal and switching the database to unbuffered writes, which is probably okay for a dev environment.

@jamielsharief
Author

jamielsharief commented Feb 25, 2021

I have a Dockerfile which uses a setup script

FROM ubuntu:20.04

ENV DATE_TIMEZONE UTC
ENV DEBIAN_FRONTEND=noninteractive

COPY . /var/www
WORKDIR /var/www
RUN scripts/setup.sh --docker

CMD ["/usr/sbin/apache2ctl", "-DFOREGROUND"] 

This is the start of the setup script:

apt-get update -y
apt-get upgrade -y
apt-get install -y curl git nano unzip rsync zip apache2 libapache2-mod-php php php-apcu php-cli php-common php-curl php-intl php-json php-mailparse php-mbstring php-mysql php-opcache php-pear php-readline php-xml php-zip npm sqlite3 php-sqlite3 cron

It is just so painfully slow: more than 10 minutes. Can somebody from Docker clarify the situation here? Is this a Docker problem or an Apple problem, as suggested by @jnugh?

@danielgindi

Has someone filed a bug with Apple?

@stephen-turner
Contributor

Yes, we (Docker) have filed a ticket with Apple and indeed have had a call with their developers about it.

We're also wondering if we can speed it up by flushing less often: there would be a slight risk of lost data if there was a power cut or something, but in a development context that might be an acceptable trade-off.
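In the meantime, the same trade-off can be made per container with server flags rather than a Docker change. For example, the official postgres image passes extra arguments through to the server (a sketch; the container name, password, and tag are placeholders, and this is safe only for throwaway development data):

```shell
# Start Postgres with durability features disabled for a dev database:
# fsync=off skips the flushes entirely, the other two settings cut
# related WAL overhead. Never use this for data you cannot recreate.
docker run -d --name pg-dev -e POSTGRES_PASSWORD=secret postgres:13 \
  -c fsync=off -c synchronous_commit=off -c full_page_writes=off
```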

@stephen-turner
Contributor

See also #5389.

@jnugh

jnugh commented Feb 25, 2021

@stephen-turner Awesome - would it actually be possible for docker to prevent syncs? Maybe by patching the vm kernel. I just tried to monkeypatch fsync in a test 😸

@jamielsharief I tested with your commands, building a shared object that routes fsync calls to no-ops. (I took the code from here: https://www.mjr19.org.uk/IT/apt-get_fsync.html)

I built the shared object in a different container so I did not have any additional packages installed in the second run (some packages would have been installed as a dependency of build-essential).

Default (probably what you are seeing, cancelled at some point):

 => [4/4] RUN ./setup.sh --docker  424.5s

Prevent fsyncs (finished):

  => [5/5] RUN ./setup.sh --docker   86.2s

Fast track to reproduce: add this to your Dockerfile; be aware that it loads the no-op fsync all the time:

COPY libnosync.so /tmp/libnosync.so
ENV LD_PRELOAD=/tmp/libnosync.so

Alternative only for your script:

RUN LD_PRELOAD=/tmp/libnosync.so ./setup.sh --docker

libnosync.so.zip

If you don't trust loading random .so files from the internet:

docker run -it ubuntu:20.04 bash
apt update
apt install -y build-essential
# get the c source into the container (e.g. by using docker cp or an editor like vim)
gcc -c -fPIC nosync.c
gcc -shared -o libnosync.so nosync.o
# Copy the so file from the container and use for other projects

C source (copied from the link):

#include <fcntl.h>

int fsync(int fd){
  if (fcntl(fd,F_GETFL)==-1) return 1; /* Will set errno appropriately */
  return 0;
}

Keep in mind that this only changes fsync behavior. There are other solutions like https://www.flamingspork.com/projects/libeatmydata/, which claims to also patch e.g. open with O_SYNC and might be a solution for dev database containers, too. But I'd rather wait for Apple to fix this. There is probably a reason the tool is named "eat my data" 😄

@jamielsharief
Author

jamielsharief commented Feb 25, 2021

@jnugh That's amazing, thank you for spending the time on that. This particular Docker image, which takes 722 seconds to run the setup script, is for an open source project, so I can't really patch it with this wizardry ;)

@jamielsharief
Author

@jnugh I just checked the advanced settings of my VMs in Parallels, and it seems to be using Apple virtualisation under the hood, which does not make sense, since it does not experience the same problem.


@jnugh

jnugh commented Feb 26, 2021

There are two different Hypervisor APIs https://developer.apple.com/documentation/hypervisor (probably used by parallels?) and https://developer.apple.com/documentation/virtualization (used by docker).

I don't know how much is shared between them. When I ran Parallels to test, I was not seeing the process from the Virtualization framework, but I also did not check whether multiple hypervisors were available. So if Apple was the default selection, I'm pretty sure it does not use the Virtualization framework that Docker and vftool use 🤔

@jamielsharief
Author

@jnugh Fair enough, I heard the virtualisation was a new feature, and assumed it was the same thing.

@perrydrums

Until this issue is fixed, I'm just running my database outside of the container; https://dev.to/perry/fix-slow-docker-databases-for-apple-silicon-m1-chips-2ip1

Not a perfect solution, but at least I can work with big databases in the meantime :)

@moulinraphael

moulinraphael commented Mar 23, 2021

Do you have any solution for this performance issue? (Except running the database outside of a container ^^)

@jamielsharief
Author

I set up LXC containers for MySQL, MariaDB and Postgres.

@jnugh

jnugh commented Mar 27, 2021

On the newly released RC2 version using qemu (https://docs.docker.com/docker-for-mac/apple-m1/):

docker run postgres /usr/lib/postgresql/13/bin/pg_test_fsync

5 seconds per test
O_DIRECT supported on this platform for open_datasync and open_sync.

Compare file sync methods using one 8kB write:
(in wal_sync_method preference order, except fdatasync is Linux's default)
        open_datasync                      8800.652 ops/sec     114 usecs/op
        fdatasync                          7539.149 ops/sec     133 usecs/op
        fsync                              5664.694 ops/sec     177 usecs/op
        fsync_writethrough                              n/a
        open_sync                          5217.622 ops/sec     192 usecs/op

Compare file sync methods using two 8kB writes:
(in wal_sync_method preference order, except fdatasync is Linux's default)
        open_datasync                      3975.366 ops/sec     252 usecs/op
        fdatasync                          7754.994 ops/sec     129 usecs/op
        fsync                              5503.553 ops/sec     182 usecs/op
        fsync_writethrough                              n/a
        open_sync                          3209.714 ops/sec     312 usecs/op

Compare open_sync with different write sizes:
(This is designed to compare the cost of writing 16kB in different write
open_sync sizes.)
         1 * 16kB open_sync write          6303.307 ops/sec     159 usecs/op
         2 *  8kB open_sync writes         2909.971 ops/sec     344 usecs/op
         4 *  4kB open_sync writes         1639.042 ops/sec     610 usecs/op
         8 *  2kB open_sync writes          737.339 ops/sec    1356 usecs/op
        16 *  1kB open_sync writes          397.247 ops/sec    2517 usecs/op

Test if fsync on non-write file descriptor is honored:
(If the times are similar, fsync() can sync data written on a different
descriptor.)
        write, fsync, close                5117.824 ops/sec     195 usecs/op
        write, close, fsync                4810.348 ops/sec     208 usecs/op

Non-sync'ed 8kB writes:
        write                            361207.594 ops/sec       3 usecs/op

A 100-200x speedup in the Postgres fsync benchmarks compared to the previous release with the Virtualization framework 👍🏽

@maxekman

Similar numbers for me on an MBA M1, and projects using DBs feel faster too!

@chinanderm

Much much much better on an M1 machine 🎉

I have a Node.js project that uses Prisma 2, and I use its "seed" feature to populate my local database. Prior to this change to Docker, that process took about five minutes for me. Now it takes 50 seconds 😮

@jamielsharief
Author

Originally, when I reported the problem, it took 655 ms to run the following MySQL statement. I can now confirm that on the latest version (from a few days ago) it takes only 35 ms. Thanks.

CREATE TABLE `bookmarks` (
	`id` INT(11) NOT NULL AUTO_INCREMENT,
	`user_id` INT(11) NOT NULL,
	`title` VARCHAR(50) NOT NULL,
	`description` TEXT,
	`url` TEXT,
	`category` VARCHAR(80),
	`created` DATETIME NOT NULL,
	`modified` DATETIME NOT NULL,
	PRIMARY KEY (id),
	INDEX `user_id` (user_id)
) ENGINE = InnoDB

@docker-robott
Collaborator

Closed issues are locked after 30 days of inactivity.
This helps our team focus on active issues.

If you have found a problem that seems similar to this, please open a new issue.

Send feedback to Docker Community Slack channels #docker-for-mac or #docker-for-windows.
/lifecycle locked

@docker docker locked and limited conversation to collaborators Apr 27, 2021