diff --git a/locale/uk/docs/guides/anatomy-of-an-http-transaction.md b/locale/uk/docs/guides/anatomy-of-an-http-transaction.md
deleted file mode 100644
index 289514b9c5537..0000000000000
--- a/locale/uk/docs/guides/anatomy-of-an-http-transaction.md
+++ /dev/null
@@ -1,430 +0,0 @@
----
-title: Anatomy of an HTTP Transaction
-layout: docs.hbs
----
-
-# Anatomy of an HTTP Transaction
-
-The purpose of this guide is to impart a solid understanding of the process of
-Node.js HTTP handling. We'll assume that you know, in a general sense, how HTTP
-requests work, regardless of language or programming environment. We'll also
-assume a bit of familiarity with Node.js [`EventEmitters`][] and [`Streams`][].
-If you're not quite familiar with them, it's worth taking a quick read through
-the API docs for each of those.
-
-## Create the Server
-
-Any node web server application will at some point have to create a web server
-object. This is done by using [`createServer`][].
-
-```javascript
-const http = require('http');
-
-const server = http.createServer((request, response) => {
- // magic happens here!
-});
-```
-
-The function that's passed in to [`createServer`][] is called once for every
-HTTP request that's made against that server, so it's called the request
-handler. In fact, the [`Server`][] object returned by [`createServer`][] is an
-[`EventEmitter`][], and what we have here is just shorthand for creating a
-`server` object and then adding the listener later.
-
-```javascript
-const server = http.createServer();
-server.on('request', (request, response) => {
- // the same kind of magic happens here!
-});
-```
-
-When an HTTP request hits the server, node calls the request handler function
-with a few handy objects for dealing with the transaction, `request` and
-`response`. We'll get to those shortly.
-
-In order to actually serve requests, the [`listen`][] method needs to be called
-on the `server` object. In most cases, all you'll need to pass to `listen` is
-the port number you want the server to listen on. There are some other options
-too, so consult the [API reference][].
-
-## Method, URL and Headers
-
-When handling a request, the first thing you'll probably want to do is look at
-the method and URL, so that appropriate actions can be taken. Node makes this
-relatively painless by putting handy properties onto the `request` object.
-
-```javascript
-const { method, url } = request;
-```
-> **Note:** The `request` object is an instance of [`IncomingMessage`][].
-
-The `method` here will always be a normal HTTP method/verb. The `url` is the
-full URL without the server, protocol or port. For a typical URL, this means
-everything after and including the third forward slash.
-
-Headers are also not far away. They're in their own object on `request` called
-`headers`.
-
-```javascript
-const { headers } = request;
-const userAgent = headers['user-agent'];
-```
-
-It's important to note here that all headers are represented in lower-case only,
-regardless of how the client actually sent them. This simplifies the task of
-parsing headers for whatever purpose.
-
-If some headers are repeated, then their values are overwritten or joined
-together as comma-separated strings, depending on the header. In some cases,
-this can be problematic, so [`rawHeaders`][] is also available.
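
As a sketch of the two shapes (example values assumed for illustration, not taken from a live request): `rawHeaders` is a flat `[name, value, name, value, …]` list that preserves case and duplicates, while `headers` has lower-cased names with duplicates of most headers joined:

```javascript
// Assumed example data, as it might appear for a request with a duplicated header.
const rawHeaders = [
  'Accept', 'text/html',
  'Accept', 'application/json',
  'Host', 'example.com',
];

// The corresponding `headers` object: lower-cased names, duplicates joined.
const headers = {
  accept: 'text/html, application/json',
  host: 'example.com',
};

console.log(rawHeaders.length / 2); // 3 name/value pairs
console.log(headers.accept); // 'text/html, application/json'
```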
-
-## Request Body
-
-When receiving a `POST` or `PUT` request, the request body might be important to
-your application. Getting at the body data is a little more involved than
-accessing request headers. The `request` object that's passed in to a handler
-implements the [`ReadableStream`][] interface. This stream can be listened to or
-piped elsewhere just like any other stream. We can grab the data right out of
-the stream by listening to the stream's `'data'` and `'end'` events.
-
-The chunk emitted in each `'data'` event is a [`Buffer`][]. If you know it's
-going to be string data, the best thing to do is collect the data in an array,
-then at the `'end'`, concatenate and stringify it.
-
-```javascript
-let body = [];
-request.on('data', (chunk) => {
- body.push(chunk);
-}).on('end', () => {
- body = Buffer.concat(body).toString();
- // at this point, `body` has the entire request body stored in it as a string
-});
-```
-
-> **Note:** This may seem a tad tedious, and in many cases, it is. Luckily,
-there are modules like [`concat-stream`][] and [`body`][] on [`npm`][] which can
-help hide away some of this logic. It's important to have a good understanding
-of what's going on before going down that road, and that's why you're here!
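
If you expect a JSON body, a natural follow-up step (a hedged sketch; `parseJsonBody` is a helper name invented here) is to parse the collected string defensively, since clients can send anything:

```javascript
// Hypothetical helper: turn a collected body string into data, tolerating bad input.
function parseJsonBody(body) {
  try {
    return { ok: true, data: JSON.parse(body) };
  } catch (err) {
    return { ok: false, error: 'Invalid JSON' };
  }
}

console.log(parseJsonBody('{"name":"node"}').data.name); // 'node'
console.log(parseJsonBody('not json').ok); // false
```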
-
-## A Quick Thing About Errors
-
-Since the `request` object is a [`ReadableStream`][], it's also an
-[`EventEmitter`][] and behaves like one when an error happens.
-
-An error in the `request` stream presents itself by emitting an `'error'` event
-on the stream. **If you don't have a listener for that event, the error will be
-*thrown*, which could crash your Node.js program.** You should therefore add an
-`'error'` listener on your request streams, even if you just log it and
-continue on your way. (Though it's probably best to send some kind of HTTP error
-response. More on that later.)
-
-```javascript
-request.on('error', (err) => {
- // This prints the error message and stack trace to `stderr`.
- console.error(err.stack);
-});
-```
-
-There are other ways of [handling these errors][], such as other abstractions
-and tools, but always be aware that errors can and do happen, and you're going
-to have to deal with them.
-
-## What We've Got so Far
-
-At this point, we've covered creating a server, and grabbing the method, URL,
-headers and body out of requests. When we put that all together, it might look
-something like this:
-
-```javascript
-const http = require('http');
-
-http.createServer((request, response) => {
- const { headers, method, url } = request;
- let body = [];
- request.on('error', (err) => {
- console.error(err);
- }).on('data', (chunk) => {
- body.push(chunk);
- }).on('end', () => {
- body = Buffer.concat(body).toString();
- // At this point, we have the headers, method, url and body, and can now
- // do whatever we need to in order to respond to this request.
- });
-}).listen(8080); // Activates this server, listening on port 8080.
-```
-
-If we run this example, we'll be able to *receive* requests, but not *respond*
-to them. In fact, if you hit this example in a web browser, your request would
-time out, as nothing is being sent back to the client.
-
-So far we haven't touched on the `response` object at all, which is an instance
-of [`ServerResponse`][], which is a [`WritableStream`][]. It contains many
-useful methods for sending data back to the client. We'll cover that next.
-
-## HTTP Status Code
-
-If you don't bother setting it, the HTTP status code on a response will always
-be 200. Of course, not every HTTP response warrants this, and at some point
-you'll definitely want to send a different status code. To do that, you can set
-the `statusCode` property.
-
-```javascript
-response.statusCode = 404; // Tell the client that the resource wasn't found.
-```
-
-There are some other shortcuts to this, as we'll see soon.
-
-## Setting Response Headers
-
-Headers are set through a convenient method called [`setHeader`][].
-
-```javascript
-response.setHeader('Content-Type', 'application/json');
-response.setHeader('X-Powered-By', 'bacon');
-```
-
-When setting headers on a response, their names are case-insensitive. If you
-set a header repeatedly, the last value you set is the one that gets sent.
-
-## Explicitly Sending Header Data
-
-The methods of setting the headers and status code that we've already discussed
-assume that you're using "implicit headers". This means you're counting on node
-to send the headers for you at the correct time before you start sending body
-data.
-
-If you want, you can *explicitly* write the headers to the response stream.
-To do this, there's a method called [`writeHead`][], which writes the status
-code and the headers to the stream.
-
-```javascript
-response.writeHead(200, {
- 'Content-Type': 'application/json',
- 'X-Powered-By': 'bacon'
-});
-```
-
-Once you've set the headers (either implicitly or explicitly), you're ready to
-start sending response data.
-
-## Sending Response Body
-
-Since the `response` object is a [`WritableStream`][], writing a response body
-out to the client is just a matter of using the usual stream methods.
-
-```javascript
-response.write('<html>');
-response.write('<body>');
-response.write('<h1>Hello, World!</h1>');
-response.write('</body>');
-response.write('</html>');
-response.end();
-```
-
-The `end` function on streams can also take in some optional data to send as the
-last bit of data on the stream, so we can simplify the example above as follows.
-
-```javascript
-response.end('<html><body><h1>Hello, World!</h1></body></html>');
-```
-
-> **Note:** It's important to set the status and headers *before* you start
-writing chunks of data to the body. This makes sense, since headers come before
-the body in HTTP responses.
-
-## Another Quick Thing About Errors
-
-The `response` stream can also emit `'error'` events, and at some point you're
-going to have to deal with that as well. All of the advice for `request` stream
-errors still applies here.
-
-## Put It All Together
-
-Now that we've learned about making HTTP responses, let's put it all together.
-Building on the earlier example, we're going to make a server that sends back
-all of the data that was sent to us by the user. We'll format that data as JSON
-using `JSON.stringify`.
-
-```javascript
-const http = require('http');
-
-http.createServer((request, response) => {
- const { headers, method, url } = request;
- let body = [];
- request.on('error', (err) => {
- console.error(err);
- }).on('data', (chunk) => {
- body.push(chunk);
- }).on('end', () => {
- body = Buffer.concat(body).toString();
- // BEGINNING OF NEW STUFF
-
- response.on('error', (err) => {
- console.error(err);
- });
-
- response.statusCode = 200;
- response.setHeader('Content-Type', 'application/json');
- // Note: the 2 lines above could be replaced with this next one:
- // response.writeHead(200, {'Content-Type': 'application/json'})
-
- const responseBody = { headers, method, url, body };
-
- response.write(JSON.stringify(responseBody));
- response.end();
- // Note: the 2 lines above could be replaced with this next one:
- // response.end(JSON.stringify(responseBody))
-
- // END OF NEW STUFF
- });
-}).listen(8080);
-```
-
-## Echo Server Example
-
-Let's simplify the previous example to make a simple echo server, which just
-sends whatever data is received in the request right back in the response. All
-we need to do is grab the data from the request stream and write that data to
-the response stream, similar to what we did previously.
-
-```javascript
-const http = require('http');
-
-http.createServer((request, response) => {
- let body = [];
- request.on('data', (chunk) => {
- body.push(chunk);
- }).on('end', () => {
- body = Buffer.concat(body).toString();
- response.end(body);
- });
-}).listen(8080);
-```
-
-Now let's tweak this. We want to only send an echo under the following
-conditions:
-
-* The request method is POST.
-* The URL is `/echo`.
-
-In any other case, we want to simply respond with a 404.
-
-```javascript
-const http = require('http');
-
-http.createServer((request, response) => {
- if (request.method === 'POST' && request.url === '/echo') {
- let body = [];
- request.on('data', (chunk) => {
- body.push(chunk);
- }).on('end', () => {
- body = Buffer.concat(body).toString();
- response.end(body);
- });
- } else {
- response.statusCode = 404;
- response.end();
- }
-}).listen(8080);
-```
-
-> **Note:** By checking the URL in this way, we're doing a form of "routing".
-Other forms of routing can be as simple as `switch` statements or as complex as
-whole frameworks like [`express`][]. If you're looking for something that does
-routing and nothing else, try [`router`][].
-
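A `switch`-based router can be sketched as a plain function, decoupled from the server itself (the route names here are illustrative):

```javascript
// Minimal sketch of switch-statement routing on method and URL.
function route(method, url) {
  switch (`${method} ${url}`) {
    case 'POST /echo':
      return 'echo';
    case 'GET /':
      return 'home';
    default:
      return 'not found';
  }
}

console.log(route('POST', '/echo')); // 'echo'
console.log(route('GET', '/missing')); // 'not found'
```
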
-Great! Now let's take a stab at simplifying this. Remember, the `request` object
-is a [`ReadableStream`][] and the `response` object is a [`WritableStream`][].
-That means we can use [`pipe`][] to direct data from one to the other. That's
-exactly what we want for an echo server!
-
-```javascript
-const http = require('http');
-
-http.createServer((request, response) => {
- if (request.method === 'POST' && request.url === '/echo') {
- request.pipe(response);
- } else {
- response.statusCode = 404;
- response.end();
- }
-}).listen(8080);
-```
-
-Yay streams!
-
-We're not quite done yet though. As mentioned multiple times in this guide,
-errors can and do happen, and we need to deal with them.
-
-To handle errors on the request stream, we'll log the error to `stderr` and send
-a 400 status code to indicate a `Bad Request`. In a real-world application,
-though, we'd want to inspect the error to figure out what the correct status code
-and message would be. As usual with errors, you should consult the
-[`Error` documentation][].
-
-On the response, we'll just log the error to `stderr`.
-
-```javascript
-const http = require('http');
-
-http.createServer((request, response) => {
- request.on('error', (err) => {
- console.error(err);
- response.statusCode = 400;
- response.end();
- });
- response.on('error', (err) => {
- console.error(err);
- });
- if (request.method === 'POST' && request.url === '/echo') {
- request.pipe(response);
- } else {
- response.statusCode = 404;
- response.end();
- }
-}).listen(8080);
-```
-
-We've now covered most of the basics of handling HTTP requests. At this point,
-you should be able to:
-
-* Instantiate an HTTP server with a request handler function, and have it listen
-on a port.
-* Get headers, URL, method and body data from `request` objects.
-* Make routing decisions based on URL and/or other data in `request` objects.
-* Send headers, HTTP status codes and body data via `response` objects.
-* Pipe data from `request` objects and to `response` objects.
-* Handle stream errors in both the `request` and `response` streams.
-
-From these basics, Node.js HTTP servers for many typical use cases can be
-constructed. There are plenty of other things these APIs provide, so be sure to
-read through the API docs for [`EventEmitters`][], [`Streams`][], and [`HTTP`][].
-
-
-
-[`EventEmitters`]: https://nodejs.org/api/events.html
-[`Streams`]: https://nodejs.org/api/stream.html
-[`createServer`]: https://nodejs.org/api/http.html#http_http_createserver_requestlistener
-[`Server`]: https://nodejs.org/api/http.html#http_class_http_server
-[`listen`]: https://nodejs.org/api/http.html#http_server_listen_port_hostname_backlog_callback
-[API reference]: https://nodejs.org/api/http.html
-[`IncomingMessage`]: https://nodejs.org/api/http.html#http_class_http_incomingmessage
-[`ReadableStream`]: https://nodejs.org/api/stream.html#stream_class_stream_readable
-[`rawHeaders`]: https://nodejs.org/api/http.html#http_message_rawheaders
-[`Buffer`]: https://nodejs.org/api/buffer.html
-[`concat-stream`]: https://www.npmjs.com/package/concat-stream
-[`body`]: https://www.npmjs.com/package/body
-[`npm`]: https://www.npmjs.com
-[`EventEmitter`]: https://nodejs.org/api/events.html#events_class_eventemitter
-[handling these errors]: https://nodejs.org/api/errors.html
-[`ServerResponse`]: https://nodejs.org/api/http.html#http_class_http_serverresponse
-[`setHeader`]: https://nodejs.org/api/http.html#http_response_setheader_name_value
-[`WritableStream`]: https://nodejs.org/api/stream.html#stream_class_stream_writable
-[`writeHead`]: https://nodejs.org/api/http.html#http_response_writehead_statuscode_statusmessage_headers
-[`express`]: https://www.npmjs.com/package/express
-[`router`]: https://www.npmjs.com/package/router
-[`pipe`]: https://nodejs.org/api/stream.html#stream_readable_pipe_destination_options
-[`Error` documentation]: https://nodejs.org/api/errors.html
-[`HTTP`]: https://nodejs.org/api/http.html
diff --git a/locale/uk/docs/guides/index.md b/locale/uk/docs/guides/index.md
deleted file mode 100644
index 68498ec32f839..0000000000000
--- a/locale/uk/docs/guides/index.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: Керівництва
-layout: docs.hbs
----
-
-# Guides
-
-- [Easy profiling for Node.js Applications](simple-profiling/)
-- [Dockerizing a Node.js web app](nodejs-docker-webapp/)
-- [Anatomy of an HTTP Transaction](anatomy-of-an-http-transaction/)
-- [Working with Different Filesystems](working-with-different-filesystems/)
diff --git a/locale/uk/docs/guides/nodejs-docker-webapp.md b/locale/uk/docs/guides/nodejs-docker-webapp.md
deleted file mode 100644
index 434154a2f0535..0000000000000
--- a/locale/uk/docs/guides/nodejs-docker-webapp.md
+++ /dev/null
@@ -1,274 +0,0 @@
----
-title: Dockerizing a Node.js web app
-layout: docs.hbs
----
-
-# Dockerizing a Node.js web app
-
-The goal of this example is to show you how to get a Node.js application into a
-Docker container. The guide is intended for development, and *not* for a
-production deployment. The guide also assumes you have a working [Docker
-installation](https://docs.docker.com/engine/installation/) and a basic
-understanding of how a Node.js application is structured.
-
-In the first part of this guide we will create a simple web application in
-Node.js, then we will build a Docker image for that application, and lastly we
-will run the image as a container.
-
-Docker allows you to package an application with all of its dependencies into a
-standardized unit, called a container, for software development. A container is
-a stripped-to-basics version of a Linux operating system. An image is software
-you load into a container.
-
-## Create the Node.js app
-
-First, create a new directory where all the files would live. In this directory
-create a `package.json` file that describes your app and its dependencies:
-
-```json
-{
- "name": "docker_web_app",
- "version": "1.0.0",
- "description": "Node.js on Docker",
- "author": "First Last <[email protected]>",
- "main": "server.js",
- "scripts": {
- "start": "node server.js"
- },
- "dependencies": {
- "express": "^4.16.1"
- }
-}
-```
-
-With your new `package.json` file, run `npm install`. If you are using `npm`
-version 5 or later, this will generate a `package-lock.json` file which will be copied
-to your Docker image.
-
-Then, create a `server.js` file that defines a web app using the
-[Express.js](https://expressjs.com/) framework:
-
-```javascript
-'use strict';
-
-const express = require('express');
-
-// Constants
-const PORT = 8080;
-const HOST = '0.0.0.0';
-
-// App
-const app = express();
-app.get('/', (req, res) => {
- res.send('Hello world\n');
-});
-
-app.listen(PORT, HOST);
-console.log(`Running on http://${HOST}:${PORT}`);
-```
-
-In the next steps, we'll look at how you can run this app inside a Docker
-container using the official Docker image. First, you'll need to build a Docker
-image of your app.
-
-## Creating a Dockerfile
-
-Create an empty file called `Dockerfile`:
-
-```bash
-touch Dockerfile
-```
-
-Open the `Dockerfile` in your favorite text editor.
-
-The first thing we need to do is define what image we want to build from. Here
-we will use the latest LTS (long term support) version `8` of `node` available
-from the [Docker Hub](https://hub.docker.com/):
-
-```docker
-FROM node:8
-```
-
-Next, create a directory to hold the application code inside the image; this
-will be the working directory for your application:
-
-```docker
-# Create app directory
-WORKDIR /usr/src/app
-```
-
-This image comes with Node.js and NPM already installed so the next thing we
-need to do is to install your app dependencies using the `npm` binary. Please
-note that if you are using `npm` version 4 or earlier a `package-lock.json`
-file will *not* be generated.
-
-```docker
-# Install app dependencies
-# A wildcard is used to ensure both package.json AND package-lock.json are copied
-# where available (npm@5+)
-COPY package*.json ./
-
-RUN npm install
-# If you are building your code for production
-# RUN npm install --only=production
-```
-
-Note that, rather than copying the entire working directory, we are only
-copying the `package.json` and `package-lock.json` files. This allows us to
-take advantage of cached Docker layers. bitJudo has a good explanation of this
-[here](http://bitjudo.com/blog/2014/03/13/building-efficient-dockerfiles-node-dot-js/).
-
-To bundle your app's source code inside the Docker image, use the `COPY`
-instruction:
-
-```docker
-# Bundle app source
-COPY . .
-```
-
-Your app binds to port `8080` so you'll use the `EXPOSE` instruction to have it
-mapped by the `docker` daemon:
-
-```docker
-EXPOSE 8080
-```
-
-Last but not least, define the command to run your app using `CMD` which defines
-your runtime. Here we will use the basic `npm start` which will run
-`node server.js` to start your server:
-
-```docker
-CMD [ "npm", "start" ]
-```
-
-Your `Dockerfile` should now look like this:
-
-```docker
-FROM node:8
-
-# Create app directory
-WORKDIR /usr/src/app
-
-# Install app dependencies
-# A wildcard is used to ensure both package.json AND package-lock.json are copied
-# where available (npm@5+)
-COPY package*.json ./
-
-RUN npm install
-# If you are building your code for production
-# RUN npm install --only=production
-
-# Bundle app source
-COPY . .
-
-EXPOSE 8080
-CMD [ "npm", "start" ]
-```
-
-## .dockerignore file
-
-Create a `.dockerignore` file in the same directory as your `Dockerfile`
-with following content:
-
-```
-node_modules
-npm-debug.log
-```
-
-This will prevent your local modules and debug logs from being copied onto your
-Docker image and possibly overwriting modules installed within your image.
-
-## Building your image
-
-Go to the directory that has your `Dockerfile` and run the following command to
-build the Docker image. The `-t` flag lets you tag your image so it's easier to
-find later using the `docker images` command:
-
-```bash
-$ docker build -t <your username>/node-web-app .
-```
-
-Your image will now be listed by Docker:
-
-```bash
-$ docker images
-
-# Example
-REPOSITORY TAG ID CREATED
-node 8 1934b0b038d1 5 days ago
-<your username>/node-web-app    latest    d64d3505b0d2    1 minute ago
-```
-
-## Run the image
-
-Running your image with `-d` runs the container in detached mode, leaving the
-container running in the background. The `-p` flag redirects a public port to a
-private port inside the container. Run the image you previously built:
-
-```bash
-$ docker run -p 49160:8080 -d <your username>/node-web-app
-```
-
-Print the output of your app:
-
-```bash
-# Get container ID
-$ docker ps
-
-# Print app output
-$ docker logs <container id>
-
-# Example
-Running on http://0.0.0.0:8080
-```
-
-If you need to go inside the container you can use the `exec` command:
-
-```bash
-# Enter the container
-$ docker exec -it <container id> /bin/bash
-```
-
-## Test
-
-To test your app, get the port of your app that Docker mapped:
-
-```bash
-$ docker ps
-
-# Example
-ID IMAGE COMMAND ... PORTS
-ecce33b30ebf  <your username>/node-web-app:latest  npm start  ...  49160->8080
-```
-
-In the example above, Docker mapped the `8080` port inside of the container to
-the port `49160` on your machine.
-
-Now you can call your app using `curl` (install if needed via: `sudo apt-get
-install curl`):
-
-```bash
-$ curl -i localhost:49160
-
-HTTP/1.1 200 OK
-X-Powered-By: Express
-Content-Type: text/html; charset=utf-8
-Content-Length: 12
-ETag: W/"c-M6tWOb/Y57lesdjQuHeB1P/qTV0"
-Date: Mon, 13 Nov 2017 20:53:59 GMT
-Connection: keep-alive
-
-Hello world
-```
-
-We hope this tutorial helped you get up and running a simple Node.js application
-on Docker.
-
-You can find more information about Docker and Node.js on Docker in the
-following places:
-
-* [Official Node.js Docker Image](https://hub.docker.com/_/node/)
-* [Node.js Docker Best Practices Guide](https://github.com/nodejs/docker-node/blob/master/docs/BestPractices.md)
-* [Official Docker documentation](https://docs.docker.com/)
-* [Docker Tag on Stack Overflow](https://stackoverflow.com/questions/tagged/docker)
-* [Docker Subreddit](https://reddit.com/r/docker)
diff --git a/locale/uk/docs/guides/simple-profiling.md b/locale/uk/docs/guides/simple-profiling.md
deleted file mode 100644
index a980721f9dfa3..0000000000000
--- a/locale/uk/docs/guides/simple-profiling.md
+++ /dev/null
@@ -1,280 +0,0 @@
----
-title: Easy profiling for Node.js Applications
-layout: docs.hbs
----
-
-# Easy profiling for Node.js Applications
-
-There are many third party tools available for profiling Node.js applications
-but, in many cases, the easiest option is to use the built-in Node.js profiler.
-The built-in profiler uses the [profiler inside V8][] which samples the stack at
-regular intervals during program execution. It records the results of these
-samples, along with important optimization events such as JIT compiles, as a
-series of ticks:
-
-```
-code-creation,LazyCompile,0,0x2d5000a337a0,396,"bp native array.js:1153:16",0x289f644df68,~
-code-creation,LazyCompile,0,0x2d5000a33940,716,"hasOwnProperty native v8natives.js:198:30",0x289f64438d0,~
-code-creation,LazyCompile,0,0x2d5000a33c20,284,"ToName native runtime.js:549:16",0x289f643bb28,~
-code-creation,Stub,2,0x2d5000a33d40,182,"DoubleToIStub"
-code-creation,Stub,2,0x2d5000a33e00,507,"NumberToStringStub"
-```
-
-In the past you needed the V8 source code to be able to interpret the ticks.
-Luckily, tools were introduced in Node.js 4.4.0 that facilitate the
-consumption of this information without separately building V8 from source.
-Let's see how the built-in profiler can help provide insight into application
-performance.
-
-To illustrate the use of the tick profiler, we will work with a simple Express
-application. Our application will have two handlers, one for adding new users to
-our system:
-
-```javascript
-app.get('/newUser', (req, res) => {
- let username = req.query.username || '';
- const password = req.query.password || '';
-
- username = username.replace(/[!@#$%^&*]/g, '');
-
- if (!username || !password || users[username]) {
- return res.sendStatus(400);
- }
-
- const salt = crypto.randomBytes(128).toString('base64');
- const hash = crypto.pbkdf2Sync(password, salt, 10000, 512, 'sha512');
-
- users[username] = { salt, hash };
-
- res.sendStatus(200);
-});
-```
-
-and another for validating user authentication attempts:
-
-```javascript
-app.get('/auth', (req, res) => {
- let username = req.query.username || '';
- const password = req.query.password || '';
-
- username = username.replace(/[!@#$%^&*]/g, '');
-
- if (!username || !password || !users[username]) {
- return res.sendStatus(400);
- }
-
- const { salt, hash } = users[username];
- const encryptHash = crypto.pbkdf2Sync(password, salt, 10000, 512, 'sha512');
-
- if (crypto.timingSafeEqual(hash, encryptHash)) {
- res.sendStatus(200);
- } else {
- res.sendStatus(401);
- }
-});
-```
-
-*Please note that these are NOT recommended handlers for authenticating users in
-your Node.js applications and are used purely for illustration purposes. You
-should not be trying to design your own cryptographic authentication mechanisms
-in general. It is much better to use existing, proven authentication solutions.*
-
-Now assume that we've deployed our application and users are complaining about
-high latency on requests. We can easily run the app with the built-in profiler:
-
-```
-NODE_ENV=production node --prof app.js
-```
-
-then register a user with `curl` and put some load on the server using `ab`
-(ApacheBench):
-
-```
-curl -X GET "http://localhost:8080/newUser?username=matt&password=password"
-ab -k -c 20 -n 250 "http://localhost:8080/auth?username=matt&password=password"
-```
-
-and get an ab output of:
-
-```
-Concurrency Level: 20
-Time taken for tests: 46.932 seconds
-Complete requests: 250
-Failed requests: 0
-Keep-Alive requests: 250
-Total transferred: 50250 bytes
-HTML transferred: 500 bytes
-Requests per second: 5.33 [#/sec] (mean)
-Time per request: 3754.556 [ms] (mean)
-Time per request: 187.728 [ms] (mean, across all concurrent requests)
-Transfer rate: 1.05 [Kbytes/sec] received
-
-...
-
-Percentage of the requests served within a certain time (ms)
- 50% 3755
- 66% 3804
- 75% 3818
- 80% 3825
- 90% 3845
- 95% 3858
- 98% 3874
- 99% 3875
- 100% 4225 (longest request)
-```
-
-From this output, we see that we're only managing to serve about 5 requests per
-second and that the average request takes just under 4 seconds round trip. In a
-real world example, we could be doing lots of work in many functions on behalf
-of a user request but even in our simple example, time could be lost compiling
-regular expressions, generating random salts, generating unique hashes from user
-passwords, or inside the Express framework itself.
-
-Since we ran our application using the `--prof` option, a tick file was generated
-in the same directory as your local run of the application. It should have the
-form `isolate-0xnnnnnnnnnnnn-v8.log` (where `n` is a digit).
-
-In order to make sense of this file, we need to use the tick processor bundled
-with the Node.js binary. To run the processor, use the `--prof-process` flag:
-
-```
-node --prof-process isolate-0xnnnnnnnnnnnn-v8.log > processed.txt
-```
-
-Opening processed.txt in your favorite text editor will give you a few different
-types of information. The file is broken up into sections which are again broken
-up by language. First, we look at the summary section and see:
-
-```
- [Summary]:
- ticks total nonlib name
- 79 0.2% 0.2% JavaScript
- 36703 97.2% 99.2% C++
- 7 0.0% 0.0% GC
- 767 2.0% Shared libraries
- 215 0.6% Unaccounted
-```
-
-This tells us that 97% of all samples gathered occurred in C++ code and that
-when viewing other sections of the processed output we should pay most attention
-to work being done in C++ (as opposed to JavaScript). With this in mind, we next
-find the [C++] section which contains information about which C++ functions are
-taking the most CPU time and see:
-
-```
- [C++]:
- ticks total nonlib name
- 19557 51.8% 52.9% node::crypto::PBKDF2(v8::FunctionCallbackInfo<v8::Value> const&)
- 4510 11.9% 12.2% _sha1_block_data_order
- 3165 8.4% 8.6% _malloc_zone_malloc
-```
-
-We see that the top 3 entries account for 72.1% of CPU time taken by the
-program. From this output, we immediately see that at least 51.8% of CPU time is
-taken up by a function called PBKDF2 which corresponds to our hash generation
-from a user's password. However, it may not be immediately obvious how the lower
-two entries factor into our application (or if it is we will pretend otherwise
-for the sake of example). To better understand the relationship between these
-functions, we will next look at the [Bottom up (heavy) profile] section which
-provides information about the primary callers of each function. Examining this
-section, we find:
-
-```
- ticks parent name
- 19557 51.8% node::crypto::PBKDF2(v8::FunctionCallbackInfo<v8::Value> const&)
- 19557 100.0% v8::internal::Builtins::~Builtins()
- 19557 100.0% LazyCompile: ~pbkdf2 crypto.js:557:16
-
- 4510 11.9% _sha1_block_data_order
- 4510 100.0% LazyCompile: *pbkdf2 crypto.js:557:16
- 4510 100.0% LazyCompile: *exports.pbkdf2Sync crypto.js:552:30
-
- 3165 8.4% _malloc_zone_malloc
- 3161 99.9% LazyCompile: *pbkdf2 crypto.js:557:16
- 3161 100.0% LazyCompile: *exports.pbkdf2Sync crypto.js:552:30
-```
-
-Parsing this section takes a little more work than the raw tick counts above.
-Within each of the "call stacks" above, the percentage in the parent column
-tells you the percentage of samples for which the function in the row above was
-called by the function in the current row. For example, in the middle "call
-stack" above, we see that `_sha1_block_data_order` occurred in 11.9% of
-samples, which we knew from the raw counts above. However, here, we
-can also tell that it was always called by the pbkdf2 function inside the
-Node.js crypto module. We see that similarly, `_malloc_zone_malloc` was called
-almost exclusively by the same pbkdf2 function. Thus, using the information in
-this view, we can tell that our hash computation from the user's password
-accounts not only for the 51.8% from above but also for all CPU time in the top
-3 most sampled functions since the calls to `_sha1_block_data_order` and
-`_malloc_zone_malloc` were made on behalf of the pbkdf2 function.
-
-At this point, it is very clear that the password-based hash generation should
-be the target of our optimization. Thankfully, you've fully internalized the
-[benefits of asynchronous programming][] and you realize that the work to
-generate a hash from the user's password is being done in a synchronous way and
-thus tying up the event loop. This prevents us from working on other incoming
-requests while computing a hash.
-
-To remedy this issue, you make a small modification to the above handlers to use
-the asynchronous version of the pbkdf2 function:
-
-```javascript
-app.get('/auth', (req, res) => {
- let username = req.query.username || '';
- const password = req.query.password || '';
-
- username = username.replace(/[!@#$%^&*]/g, '');
-
- if (!username || !password || !users[username]) {
- return res.sendStatus(400);
- }
-
-  crypto.pbkdf2(password, users[username].salt, 10000, 512, 'sha512', (err, hash) => {
- if (users[username].hash.toString() === hash.toString()) {
- res.sendStatus(200);
- } else {
- res.sendStatus(401);
- }
- });
-});
-```
-
-A new run of the ab benchmark above with the asynchronous version of your app
-yields:
-
-```
-Concurrency Level: 20
-Time taken for tests: 12.846 seconds
-Complete requests: 250
-Failed requests: 0
-Keep-Alive requests: 250
-Total transferred: 50250 bytes
-HTML transferred: 500 bytes
-Requests per second: 19.46 [#/sec] (mean)
-Time per request: 1027.689 [ms] (mean)
-Time per request: 51.384 [ms] (mean, across all concurrent requests)
-Transfer rate: 3.82 [Kbytes/sec] received
-
-...
-
-Percentage of the requests served within a certain time (ms)
- 50% 1018
- 66% 1035
- 75% 1041
- 80% 1043
- 90% 1049
- 95% 1063
- 98% 1070
- 99% 1071
- 100% 1079 (longest request)
-```
-
-Yay! Your app is now serving about 20 requests per second, roughly 4 times more
-than it was with the synchronous hash generation. Additionally, the average
-latency is down from the 4 seconds before to just over 1 second.
-
-Hopefully, through the performance investigation of this (admittedly contrived)
-example, you've seen how the V8 tick processor can help you gain a better
-understanding of the performance of your Node.js applications.
-
-[profiler inside V8]: https://developers.google.com/v8/profiler_example
-[benefits of asynchronous programming]: https://nodesource.com/blog/why-asynchronous
diff --git a/locale/uk/docs/guides/working-with-different-filesystems.md b/locale/uk/docs/guides/working-with-different-filesystems.md
deleted file mode 100644
index 36494b9bb1642..0000000000000
--- a/locale/uk/docs/guides/working-with-different-filesystems.md
+++ /dev/null
@@ -1,224 +0,0 @@
----
-title: Working with Different Filesystems
-layout: docs.hbs
----
-
-# Working with Different Filesystems
-
-Node exposes many features of the filesystem. But not all filesystems are alike.
-The following are suggested best practices to keep your code simple and safe
-when working with different filesystems.
-
-## Filesystem Behavior
-
-Before you can work with a filesystem, you need to know how it behaves.
-Different filesystems behave differently and offer more or fewer features than
-others: case sensitivity, case insensitivity, case preservation, Unicode form
-preservation, timestamp resolution, extended attributes, inodes, Unix
-permissions, alternate data streams etc.
-
-Be wary of inferring filesystem behavior from `process.platform`. For example,
-do not assume that because your program is running on Darwin that you are
-therefore working on a case-insensitive filesystem (HFS+), as the user may be
-using a case-sensitive filesystem (HFSX). Similarly, do not assume that because
-your program is running on Linux that you are therefore working on a filesystem
-which supports Unix permissions and inodes, as you may be on a particular
-external drive, USB or network drive which does not.
-
-The operating system may not make it easy to infer filesystem behavior, but all
-is not lost. Instead of keeping a list of every known filesystem and behavior
-(which is always going to be incomplete), you can probe the filesystem to see
-how it actually behaves. The presence or absence of certain features that are
-easy to probe is often enough to infer the behavior of other features that
-are more difficult to probe.
-
-Remember that some users may have different filesystems mounted at various paths
-in the working tree.
-
-## Avoid a Lowest Common Denominator Approach
-
-You might be tempted to make your program act like a lowest common denominator
-filesystem, by normalizing all filenames to uppercase, normalizing all filenames
-to NFC Unicode form, and normalizing all file timestamps to, say, 1-second
-resolution. This would be the lowest common denominator approach.
-
-Do not do this. You would only be able to interact safely with a filesystem
-which has the exact same lowest common denominator characteristics in every
-respect. You would be unable to work with more advanced filesystems in the way
-that users expect, and you would run into filename or timestamp collisions. You
-would most certainly lose and corrupt user data through a series of complicated
-dependent events, and you would create bugs that would be difficult if not
-impossible to solve.
-
-What happens when you later need to support a filesystem that only has 2-second
-or 24-hour timestamp resolution? What happens when the Unicode standard advances
-to include a slightly different normalization algorithm (as has happened in the
-past)?
-
-A lowest common denominator approach would tend to try to create a portable
-program by using only "portable" system calls. This leads to programs that are
-leaky and not in fact portable.
-
-## Adopt a Superset Approach
-
-Make the best use of each platform you support by adopting a superset approach.
-For example, a portable backup program should sync btimes (the created time of a
-file or folder) correctly between Windows systems, and should not destroy or
-alter btimes, even though btimes are not supported on Linux systems. The same
-portable backup program should sync Unix permissions correctly between Linux
-systems, and should not destroy or alter Unix permissions, even though Unix
-permissions are not supported on Windows systems.
-
-Handle different filesystems by making your program act like a more advanced
-filesystem. Support a superset of all possible features: case-sensitivity,
-case-preservation, Unicode form sensitivity, Unicode form preservation, Unix
-permissions, high-resolution nanosecond timestamps, extended attributes etc.
-
-Once you have case-preservation in your program, you can always implement
-case-insensitivity if you need to interact with a case-insensitive filesystem.
-But if you forego case-preservation in your program, you cannot interact safely
-with a case-preserving filesystem. The same is true for Unicode form
-preservation and timestamp resolution preservation.
-
-If a filesystem provides you with a filename in a mix of lowercase and
-uppercase, then keep the filename in the exact case given. If a filesystem
-provides you with a filename in mixed Unicode form or NFC or NFD (or NFKC or
-NFKD), then keep the filename in the exact byte sequence given. If a filesystem
-provides you with a millisecond timestamp, then keep the timestamp in
-millisecond resolution.
-
-When you work with a lesser filesystem, you can always downsample appropriately,
-with comparison functions as required by the behavior of the filesystem on which
-your program is running. If you know that the filesystem does not support Unix
-permissions, then you should not expect to read the same Unix permissions you
-write. If you know that the filesystem does not preserve case, then you should
-be prepared to see `ABC` in a directory listing when your program creates `abc`.
-But if you know that the filesystem does preserve case, then you should consider
-`ABC` to be a different filename from `abc` when detecting file renames or if the
-filesystem is case-sensitive.
-
-## Case Preservation
-
-You may create a directory called `test/abc` and be surprised to see sometimes
-that `fs.readdir('test')` returns `['ABC']`. This is not a bug in Node. Node
-returns the filename as the filesystem stores it, and not all filesystems
-support case-preservation. Some filesystems convert all filenames to uppercase
-(or lowercase).
-
-## Unicode Form Preservation
-
-*Case preservation and Unicode form preservation are similar concepts. To
-understand why Unicode form should be preserved, first make sure that you
-understand why case should be preserved. Unicode form preservation is just as
-simple when understood correctly.*
-
-Unicode can encode the same characters using several different byte sequences.
-Several strings may look the same, but have different byte sequences. When
-working with UTF-8 strings, be careful that your expectations are in line with
-how Unicode works. Just as you would not expect all UTF-8 characters to encode
-to a single byte, you should not expect several UTF-8 strings that look the same
-to the human eye to have the same byte representation. This may be an
-expectation that you can have of ASCII, but not of UTF-8.
-
-You may create a directory called `test/café` (NFC Unicode form with byte
-sequence `<63 61 66 c3 a9>` and `string.length === 4`) and be surprised to see
-sometimes that `fs.readdir('test')` returns `['café']` (NFD Unicode form with
-byte sequence `<63 61 66 65 cc 81>` and `string.length === 5`). This is not a
-bug in Node. Node returns the filename as the filesystem stores it, and not all
-filesystems support Unicode form preservation.
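
You can see the two forms side by side in Node (escape sequences are used here so the forms are unambiguous):

```javascript
const nfc = 'caf\u00e9';  // NFC: 'é' as the single code point U+00E9
const nfd = 'cafe\u0301'; // NFD: 'e' followed by combining acute accent U+0301

console.log(nfc === nfd);                    // false: different byte sequences
console.log(Buffer.byteLength(nfc, 'utf8')); // 5 → <63 61 66 c3 a9>
console.log(Buffer.byteLength(nfd, 'utf8')); // 6 → <63 61 66 65 cc 81>
```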
-
-HFS+, for example, will normalize all filenames to a form almost always the same
-as NFD form. Do not expect HFS+ to behave the same as NTFS or EXT4 and
-vice-versa. Do not try to change data permanently through normalization as a
-leaky abstraction to paper over Unicode differences between filesystems. This
-would create problems without solving any. Rather, preserve Unicode form and use
-normalization as a comparison function only.
-
-## Unicode Form Insensitivity
-
-Unicode form insensitivity and Unicode form preservation are two different
-filesystem behaviors often mistaken for each other. Just as case-insensitivity
-has sometimes been incorrectly implemented by permanently normalizing filenames
-to uppercase when storing and transmitting filenames, so Unicode form
-insensitivity has sometimes been incorrectly implemented by permanently
-normalizing filenames to a certain Unicode form (NFD in the case of HFS+) when
-storing and transmitting filenames. It is possible and much better to implement
-Unicode form insensitivity without sacrificing Unicode form preservation, by
-using Unicode normalization for comparison only.
-
-## Comparing Different Unicode Forms
-
-Node provides `string.normalize('NFC')` and `string.normalize('NFD')`, which
-you can use to normalize a
-UTF-8 string to either NFC or NFD. You should never store the output from this
-function but only use it as part of a comparison function to test whether two
-UTF-8 strings would look the same to the user.
-
-You can use `string1.normalize('NFC') === string2.normalize('NFC')` or
-`string1.normalize('NFD') === string2.normalize('NFD')` as your comparison
-function. Which form you use does not matter, as long as both strings are
-normalized to the same form.
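
For example (the `sameFilename` helper is a sketch for illustration, not part of Node's API):

```javascript
// Compare two filenames for visual equivalence without altering either one.
function sameFilename(a, b) {
  return a.normalize('NFC') === b.normalize('NFC');
}

const nfc = 'caf\u00e9';  // 'café' in NFC form
const nfd = 'cafe\u0301'; // 'café' in NFD form

console.log(nfc === nfd);            // false: different byte sequences
console.log(sameFilename(nfc, nfd)); // true: they look the same to the user
```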
-
-Normalization is fast, but you may want to cache normalized results to avoid
-normalizing the same string many times over. If the string is not present in
-the cache, normalize it and add it to the cache. Be careful not to store or
-persist the cache; use it only as a cache.
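
A sketch of such a cache, assuming an in-memory `Map` (the `toNFC` name is made up for this example):

```javascript
// In-memory memo for normalized forms. Keep it purely as a cache:
// never persist its values or use them in place of the original names.
const nfcCache = new Map();

function toNFC(name) {
  let normalized = nfcCache.get(name);
  if (normalized === undefined) {
    normalized = name.normalize('NFC');
    nfcCache.set(name, normalized);
  }
  return normalized;
}

console.log(toNFC('cafe\u0301') === toNFC('caf\u00e9')); // true
```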
-
-Note that using `normalize()` requires that your version of Node include ICU
-(otherwise `normalize()` will just return the original string). If you download
-the latest version of Node from the website then it will include ICU.
-
-## Timestamp Resolution
-
-You may set the `mtime` (the modified time) of a file to `1444291759414`
-(millisecond resolution) and be surprised to see sometimes that `fs.stat`
-returns the new mtime as `1444291759000` (1-second resolution) or
-`1444291758000` (2-second resolution). This is not a bug in Node. Node returns
-the timestamp as the filesystem stores it, and not all filesystems support
-nanosecond, millisecond or 1-second timestamp resolution. Some filesystems even
-have very coarse resolution for the atime timestamp in particular, e.g. 24 hours
-for some FAT filesystems.
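
When you must compare timestamps across filesystems of differing resolution, compare at the coarser granularity rather than rewriting the stored values. A minimal sketch (the `sameMtime` helper and the 2000 ms granularity are assumptions for this example; probe the target filesystem rather than hard-coding a resolution):

```javascript
// Compare two mtimes (in milliseconds) at a given resolution by truncating
// both to the same granularity before comparing.
function sameMtime(aMs, bMs, resolutionMs) {
  return Math.floor(aMs / resolutionMs) === Math.floor(bMs / resolutionMs);
}

console.log(sameMtime(1444291759414, 1444291758000, 2000)); // true at 2-second resolution
console.log(sameMtime(1444291759414, 1444291758000, 1));    // false at full precision
```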
-
-## Do Not Corrupt Filenames and Timestamps Through Normalization
-
-Filenames and timestamps are user data. Just as you would never automatically
-rewrite the contents of user files to uppercase or normalize `CRLF` to `LF`
-line endings, so you should never change, interfere with, or corrupt filenames or
-timestamps through case / Unicode form / timestamp normalization. Normalization
-should only ever be used for comparison, never for altering data.
-
-Normalization is effectively a lossy hash code. You can use it to test for
-certain kinds of equivalence (e.g. do several strings look the same even though
-they have different byte sequences) but you can never use it as a substitute for
-the actual data. Your program should pass on filename and timestamp data as is.
-
-Your program can create new data in NFC (or in any combination of Unicode form
-it prefers) or with a lowercase or uppercase filename, or with a 2-second
-resolution timestamp, but your program should not corrupt existing user data by
-imposing case / Unicode form / timestamp normalization. Rather, adopt a superset
-approach and preserve case, Unicode form and timestamp resolution in your
-program. That way, you will be able to interact safely with filesystems which do
-the same.
-
-## Use Normalization Comparison Functions Appropriately
-
-Make sure that you use case / Unicode form / timestamp comparison functions
-appropriately. Do not use a case-insensitive filename comparison function if you
-are working on a case-sensitive filesystem. Do not use a Unicode form
-insensitive comparison function if you are working on a Unicode form sensitive
-filesystem (e.g. NTFS and most Linux filesystems which preserve both NFC and NFD
-or mixed Unicode forms). Do not compare timestamps at 2-second resolution if you
-are working on a nanosecond timestamp resolution filesystem.
-
-## Be Prepared for Slight Differences in Comparison Functions
-
-Be careful that your comparison functions match those of the filesystem (or
-probe the filesystem if possible to see how it would actually compare).
-Case-insensitivity for example is more complex than a simple `toLowerCase()`
-comparison. In fact, `toUpperCase()` is usually better than `toLowerCase()`
-(since it handles certain foreign language characters differently). But better
-still would be to probe the filesystem since every filesystem has its own case
-comparison table baked in.
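
A concrete case where the two differ, using JavaScript's built-in case mappings (real filesystems bake in their own tables, which may differ again):

```javascript
// German sharp s: 'ß' uppercases to 'SS', but it has no uppercase-to-'ß'
// round trip, since 'SS' lowercases to 'ss'.
console.log('\u00df'.toUpperCase()); // 'SS'
console.log('\u00df'.toLowerCase()); // 'ß' (unchanged)

// An uppercase comparison matches 'ß' with 'SS'...
console.log('\u00df'.toUpperCase() === 'SS'.toUpperCase()); // true
// ...while a lowercase comparison does not.
console.log('\u00df'.toLowerCase() === 'SS'.toLowerCase()); // false
```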
-
-As an example, Apple's HFS+ normalizes filenames to NFD form but this NFD form
-is actually an older version of the current NFD form and may sometimes be
-slightly different from the latest Unicode standard's NFD form. Do not expect
-HFS+ NFD to be exactly the same as Unicode NFD all the time.
diff --git a/locale/uk/docs/index.md b/locale/uk/docs/index.md
index 8dbd8457bc202..a666af640e83d 100644
--- a/locale/uk/docs/index.md
+++ b/locale/uk/docs/index.md
@@ -11,6 +11,7 @@ labels:
* API reference documentation;
* ES6 features;
+* guides;
### API Reference Documentation
@@ -37,3 +38,7 @@ labels:
### ES6 Features
The [ES6 section](/en/docs/es6/) describes the three groups of ES6 features, shows which features are enabled by default in Node.js with links to explanations, and explains how to find out which version of V8 ships with a particular Node.js release.
+
+### Guides
+
+[The Guides section](/en/docs/guides/) contains long-form, in-depth articles about Node.js technical features and capabilities.