Description
Scenario: A container in non-blocking mode outputs a log of 256 KiB + 1 byte (262,145 bytes).
```json
{
  "containerDefinitions": [
    {
      "name": "app",
      "image": "amazonlinux:2",
      "cpu": 0,
      "portMappings": [],
      "essential": true,
      "command": [
        "bash",
        "-c",
        "printf \"$(yes A | tr -d \"\\n\" | head -c 262145)\""
      ],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/test-awslogs",
          "mode": "non-blocking",
          "awslogs-create-group": "true",
          "awslogs-region": "us-east-1",
          "awslogs-stream-prefix": "ecs"
        },
        "secretOptions": []
      },
      (snip)
    }
  ]
}
```
Actual Behavior
Currently, the log is split into three log events.
The sizes of these log events are 262,118 bytes, 26 bytes, and 1 byte, respectively (262,118 + 26 + 1 = 262,145).
This behavior is only observed on Fargate with non-blocking mode.
Expected Behavior
The expected behavior is for the log to be split into two log events.
The sizes of these log events should be 262,118 bytes and 27 bytes (262,118 + 27 = 262,145).
This expected behavior is observed in the following cases:
- Fargate with blocking mode
- EC2 launch type (Docker) with both non-blocking and blocking modes.
Cause
The non-blocking mode logger should use `maximumBytesPerEvent` instead of `defaultAwsBufSizeInBytes`.
```go
// There are 26 additional bytes for each log event:
// See more details in: http://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutLogEvents.html
perEventBytes = 26

// The value of maximumBytesPerEvent is adopted from Docker. Reference:
// https://github.com/moby/moby/blob/19.03/daemon/logger/awslogs/cloudwatchlogs.go#L58
maximumBytesPerEvent = 262144 - perEventBytes

// The max size of CloudWatch events is 256 KiB.
defaultAwsBufSizeInBytes = 256 * 1024

// (snip)

if la.globalArgs.Mode == logger.NonBlockingMode {
	debug.SendEventsToLog(logger.DaemonName, "Starting log streaming for non-blocking mode awslogs driver",
		debug.INFO, 0)
	l = logger.NewBufferedLogger(l, defaultAwsBufSizeInBytes, la.globalArgs.MaxBufferSize, la.globalArgs.ContainerID)
}
```
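If that is correct, the fix could be as small as passing Docker's per-event limit to the buffered logger. A minimal sketch, reusing the identifiers from the snippet above (not a complete file):

```go
if la.globalArgs.Mode == logger.NonBlockingMode {
	debug.SendEventsToLog(logger.DaemonName, "Starting log streaming for non-blocking mode awslogs driver",
		debug.INFO, 0)
	// Buffer at most maximumBytesPerEvent (262,144 - 26 bytes) per read instead
	// of defaultAwsBufSizeInBytes (262,144 bytes), so a full buffer never
	// exceeds what Docker can send as a single log event.
	l = logger.NewBufferedLogger(l, maximumBytesPerEvent, la.globalArgs.MaxBufferSize, la.globalArgs.ContainerID)
}
```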
Details
In blocking mode (common logger), the buffer size is set to `maximumBytesPerEvent`.
However, in non-blocking mode (buffered logger), the buffer size is overridden to `defaultAwsBufSizeInBytes` in the `bufferedLogger.Read` method.
The buffer size above corresponds to the `buf` variable in the following code:
```go
func (l *Logger) Read(
	ctx context.Context,
	pipe io.Reader,
	source string,
	bufferSizeInBytes int,
	sendLogMsgToDest sendLogToDestFunc,
) error {
	var (
		msgTimestamp  time.Time
		bytesInBuffer int
		err           error
		eof           bool
	)
	// Initiate an in-memory buffer to hold bytes read from container pipe.
	buf := make([]byte, bufferSizeInBytes)
```
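To illustrate how this allocation drives the chunking, here is a small self-contained sketch (a stand-in using `strings.Reader` for the container pipe, not the shim logger itself) of reading 262,145 bytes through a 256 KiB buffer:

```go
package main

import (
	"fmt"
	"io"
	"strings"
)

func main() {
	// Stand-in for the container pipe: 262,145 bytes of "A".
	pipe := strings.NewReader(strings.Repeat("A", 256*1024+1))

	// Mirror the Read method's allocation: one fixed-size in-memory buffer.
	// With bufferSizeInBytes = 256*1024 (defaultAwsBufSizeInBytes), each full
	// read hands 262,144 bytes to the destination at once.
	buf := make([]byte, 256*1024)
	for {
		n, err := pipe.Read(buf)
		if n > 0 {
			fmt.Printf("read %d bytes from pipe\n", n) // prints 262144, then 1
		}
		if err == io.EOF {
			break
		}
		if err != nil {
			panic(err)
		}
	}
}
```

Each full read (262,144 bytes) is handed to Docker's `Log` as one unit, leaving a single trailing byte for the final read.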
This causes the following differences:
blocking mode

- The shim logger reads 262,118 bytes (`maximumBytesPerEvent`) of data from the pipe into the buffer, filling the buffer.
- The shim logger calls the `stream.Log` method (Docker implementation) to send the buffer data to CloudWatch Logs.
  - Docker's `Log` has a per-event buffer limit of 262,118 bytes (256 KiB - 26 bytes).
  - Since Docker's limit matches `maximumBytesPerEvent`, the log data (262,118 bytes) is sent as a single log event.
- After clearing the buffer, the shim logger reads the remaining 27 bytes from the pipe into the buffer.
- Docker's `Log` sends the remaining 27 bytes as another log event.

Result: CloudWatch Logs contain two events: 262,118 bytes and 27 bytes.
For reference, here is Docker's per-event limit and the splitting logic in the awslogs driver (moby):

```go
// See: http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/cloudwatch_limits.html
// Because the events are interpreted as UTF-8 encoded Unicode, invalid UTF-8 byte sequences are replaced with the
// Unicode replacement character (U+FFFD), which is a 3-byte sequence in UTF-8. To compensate for that and to avoid
// splitting valid UTF-8 characters into invalid byte sequences, we calculate the length of each event assuming that
// this replacement happens.
maximumBytesPerEvent = 262144 - perEventBytes

// (snip)

func (l *logStream) processEvent(batch *eventBatch, bytes []byte, timestamp int64) {
	for len(bytes) > 0 {
		// Split line length so it does not exceed the maximum
		splitOffset, lineBytes := findValidSplit(string(bytes), maximumBytesPerEvent)
		line := bytes[:splitOffset]
		event := wrappedEvent{
			inputLogEvent: types.InputLogEvent{
				Message:   aws.String(string(line)),
				Timestamp: aws.Int64(timestamp),
			},
			// (snip)
```
non-blocking mode

- The shim logger reads 256 KiB (`defaultAwsBufSizeInBytes`) of data from the pipe into the buffer, filling the buffer.
- The shim logger calls the `stream.Log` method (Docker implementation) to send the buffer data to CloudWatch Logs.
  - Docker's `Log` cannot send 256 KiB as a single log event, since Docker's limit (`maximumBytesPerEvent`) is less than `defaultAwsBufSizeInBytes`.
  - Docker splits the 256 KiB of data into two log events: 262,118 bytes (256 KiB - 26 bytes) and 26 bytes.
- After clearing the buffer, the shim logger reads the remaining 1 byte from the pipe into the buffer.
- Docker's `Log` sends the 1 byte as another log event.

Result: CloudWatch Logs contain three events: 262,118 bytes, 26 bytes, and 1 byte.
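Both walkthroughs can be checked off-line with a self-contained model (hypothetical helper names; the real drivers also handle UTF-8 boundaries and timestamps):

```go
package main

import "fmt"

const (
	perEventBytes            = 26                     // CloudWatch Logs overhead per event
	maximumBytesPerEvent     = 262144 - perEventBytes // Docker's per-event payload limit
	defaultAwsBufSizeInBytes = 256 * 1024             // shim logger's non-blocking buffer size
)

// splitIntoEvents models how Docker's awslogs driver chops one buffer of data
// into events of at most maximumBytesPerEvent bytes (UTF-8 handling omitted).
func splitIntoEvents(n int) []int {
	var events []int
	for n > 0 {
		chunk := n
		if chunk > maximumBytesPerEvent {
			chunk = maximumBytesPerEvent
		}
		events = append(events, chunk)
		n -= chunk
	}
	return events
}

// simulate models the shim logger reading total bytes from the container pipe
// in reads of at most bufSize bytes, handing each read to Docker's Log.
func simulate(total, bufSize int) []int {
	var events []int
	for total > 0 {
		read := total
		if read > bufSize {
			read = bufSize
		}
		events = append(events, splitIntoEvents(read)...)
		total -= read
	}
	return events
}

func main() {
	const logSize = 256*1024 + 1 // 262,145 bytes

	fmt.Println("non-blocking:", simulate(logSize, defaultAwsBufSizeInBytes)) // [262118 26 1]
	fmt.Println("blocking:", simulate(logSize, maximumBytesPerEvent))         // [262118 27]
}
```

Running it prints the three-event split (262,118 / 26 / 1) for the non-blocking buffer size and the two-event split (262,118 / 27) for the blocking one, matching the behavior described above.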
The issue occurs only when handling a log larger than 256 KiB, such as when using `awslogs-multiline-pattern`. This is an edge case.