diff --git a/test/scrape_mdn_test.dart b/test/scrape_mdn_test.dart
new file mode 100644
index 00000000..2cb99e9c
--- /dev/null
+++ b/test/scrape_mdn_test.dart
@@ -0,0 +1,102 @@
+// Copyright (c) 2024, the Dart project authors. Please see the AUTHORS file
+// for details. All rights reserved. Use of this source code is governed by a
+// BSD-style license that can be found in the LICENSE file.
+
+import 'package:test/test.dart';
+
+import '../tool/scrape_mdn.dart';
+
+void main() {
+  group('convertMdnToMarkdown', () {
+    test('simple', () {
+      compare('''
+Hello world
+''', '''
+Hello world
+''');
+    });
+
+    test('removes front matter', () {
+      compare('''
+---
+title: AudioNode
+slug: Web/API/AudioNode
+page-type: web-api-interface
+browser-compat: api.AudioNode
+---
+
+Hello world
+''', '''
+Hello world
+''');
+    });
+
+    test('strips InheritanceDiagram', () {
+      compare('''
+Hello world
+
+{{InheritanceDiagram}}
+
+foo bar
+''', '''
+Hello world
+
+foo bar
+''');
+    });
+
+    test('reference domxref', () {
+      compare('''
+Examples include:
+
+- the audio destination,
+- intermediate processing module (e.g. a filter like {{domxref("BiquadFilterNode")}} or {{domxref("ConvolverNode")}}), or
+- volume control (like {{domxref("GainNode")}})
+''', '''
+Examples include:
+
+- the audio destination,
+- intermediate processing module (e.g. a filter like [BiquadFilterNode] or [ConvolverNode]), or
+- volume control (like [GainNode])
+''');
+
+      compare('''
+Examples include:
+
+... of {{domxref("Response.type", "type")}} ...
+''', '''
+Examples include:
+
+... of [Response.type] ...
+''');
+
+      compare('''
+The **`sampleRate`** property of the {{
+  domxref("AudioBuffer") }} interface returns a float representing the sample rate, in
+samples per second, of the PCM data stored in the buffer.
+''', '''
+The **`sampleRate`** property of the [AudioBuffer] interface returns a float representing the sample rate, in
+samples per second, of the PCM data stored in the buffer.
+''');
+    });
+
+    test('reference jsxref', () {
+      compare('''
+... or functions such as {{jsxref("Array.forEach", "forEach()")}}.
+
+- {{jsxref("JSON.parse()")}} - counterpart for {{jsxref("JSON")}} documents.
+''', '''
+... or functions such as `forEach()`.
+
+- `JSON.parse()` - counterpart for `JSON` documents.
+''');
+    });
+  });
+}
+
+void compare(String source, String output) {
+  expect(
+    convertMdnToMarkdown(source),
+    output.trimRight(),
+  );
+}
diff --git a/tool/mdn.json b/tool/mdn.json
new file mode 100644
index 00000000..0701117b
--- /dev/null
+++ b/tool/mdn.json
@@ -0,0 +1,9250 @@
+{
+  "__meta__": {
+    "source": "[MDN Web Docs](https://developer.mozilla.org/en-US/docs/Web)",
+    "license": "[CC-BY-SA 2.5](https://creativecommons.org/licenses/by-sa/2.5/)"
+  },
+  "abortcontroller": {
+    "docs": "\n\nThe **`AbortController`** interface represents a controller object that allows you to abort one or more Web requests as and when desired.\n\nYou can create a new `AbortController` object using the [AbortController.AbortController] constructor. 
Communicating with a DOM request is done using an [AbortSignal] object.", + "properties": { + "abort": "\n\nThe **`abort()`** method of the [AbortController] interface aborts a DOM request before it has completed.\nThis is able to abort [fetch requests](/en-US/docs/Web/API/fetch), the consumption of any response bodies, or streams.", + "signal": "\n\nThe **`signal`** read-only property of the [AbortController] interface returns an [AbortSignal] object instance, which can be used to communicate with/abort a DOM request as desired." + } + }, + "abortsignal": { + "docs": "\n\nThe **`AbortSignal`** interface represents a signal object that allows you to communicate with a DOM request (such as a fetch request) and abort it if required via an [AbortController] object.\n\n", + "properties": { + "abort_event": "\n\nThe **`abort`** event of the [AbortSignal] is fired when the associated request is aborted, i.e. using [AbortController.abort].", + "abort_static": "\n\nThe **`AbortSignal.abort()`** static method returns an [AbortSignal] that is already set as aborted (and which does not trigger an [AbortSignal/abort_event] event).\n\nThis is shorthand for the following code:\n\n```js\nconst controller = new AbortController();\ncontroller.abort();\nreturn controller.signal;\n```\n\nThis could, for example, be passed to a fetch method in order to run its abort logic (i.e. it may be that code is organized such that the abort logic should be run even if the intended fetch operation has not been started).\n\n> **Note:** The method is similar in purpose to `Promise.reject`.", + "aborted": "\n\nThe **`aborted`** read-only property returns a value that indicates whether the DOM requests the signal is communicating with are aborted (`true`) or not (`false`).", + "any_static": "\n\nThe **`AbortSignal.any()`** static method takes an iterable of abort signals and returns an [AbortSignal]. The returned abort signal is aborted when any of the input iterable abort signals are aborted. The [AbortSignal.reason] will be set to the reason of the first signal that is aborted. 
If any of the the given abort signals are already aborted then so will be the returned [AbortSignal].", + "reason": "\n\nThe **`reason`** read-only property returns a JavaScript value that indicates the abort reason.\n\nThe property is `undefined` when the signal has not been aborted.\nIt can be set to a specific value when the signal is aborted, using [AbortController.abort] or [AbortSignal/abort_static].\nIf not explicitly set in those methods, it defaults to \"AbortError\" [DOMException].", + "throwifaborted": "\n\nThe **`throwIfAborted()`** method throws the signal's abort [AbortSignal.reason] if the signal has been aborted; otherwise it does nothing.\n\nAn API that needs to support aborting can accept an [AbortSignal] object and use `throwIfAborted()` to test and throw when the [`abort`](/en-US/docs/Web/API/AbortSignal/abort_event) event is signalled.\n\nThis method can also be used to abort operations at particular points in code, rather than passing to functions that take a signal.", + "timeout_static": "\n\nThe **`AbortSignal.timeout()`** static method returns an [AbortSignal] that will automatically abort after a specified time.\n\nThe signal aborts with a `TimeoutError` [DOMException] on timeout, or with `AbortError` [DOMException] due to pressing a browser stop button (or some other inbuilt \"stop\" operation).\nThis allows UIs to differentiate timeout errors, which typically require user notification, from user-triggered aborts that do not.\n\nThe timeout is based on active rather than elapsed time, and will effectively be paused if the code is running in a suspended worker, or while the document is in a back-forward cache (\"[bfcache](https://web.dev/articles/bfcache)\").\n\nTo combine multiple signals, you can use [AbortSignal/any_static], for example, to directly abort a download using either a timeout signal or by calling [AbortController.abort]." + } + }, + "absoluteorientationsensor": { + "docs": "\n\nThe **`AbsoluteOrientationSensor`** interface of the [Sensor APIs](/en-US/docs/Web/API/Sensor_APIs) describes the device's physical orientation in relation to the Earth's reference coordinate system.\n\nTo use this sensor, the user must grant permission to the `'accelerometer'`, `'gyroscope'`, and `'magnetometer'` device sensors through the [Permissions API](/en-US/docs/Web/API/Permissions_API).\n\nThis feature may be blocked by a [Permissions Policy](/en-US/docs/Web/HTTP/Permissions_Policy) set on your server.\n\n" + }, + "abstractrange": { + "docs": "\n\nThe **`AbstractRange`** abstract interface is the base class upon which all range types are defined. A **range** is an object that indicates the start and end points of a section of content within the document.\n\n> **Note:** As an abstract interface, you will not directly instantiate an object of type `AbstractRange`. Instead, you will use the [Range] or [StaticRange] interfaces. 
To understand the difference between those two interfaces, and how to choose which is appropriate for your needs, consult each interface's documentation.\n\n", + "properties": { + "collapsed": "\n\nThe read-only **`collapsed`** property of the [AbstractRange] interface returns `true` if the range's start position and end position are the same.", + "endcontainer": "\n\nThe read-only **`endContainer`** property of the [AbstractRange] interface returns the [Node] in which the end of the range is located.", + "endoffset": "\n\nThe **`endOffset`** property of the [AbstractRange] interface returns the offset into the end node of the range's end position.", + "startcontainer": "\n\nThe read-only **`startContainer`** property of the [AbstractRange] interface returns the start [Node] for the range.", + "startoffset": "\n\nThe read-only **`startOffset`** property of the [AbstractRange] interface returns the offset into the start node of the range's start position." + } + }, + "accelerometer": { + "docs": "\n\nThe **`Accelerometer`** interface of the [Sensor APIs](/en-US/docs/Web/API/Sensor_APIs) provides on each reading the acceleration applied to the device along all three axes.\n\nTo use this sensor, the user must grant permission to the `'accelerometer'`, device sensor through the [Permissions API](/en-US/docs/Web/API/Permissions_API).\n\nThis feature may be blocked by a [Permissions Policy](/en-US/docs/Web/HTTP/Permissions_Policy) set on your server.\n\n", + "properties": { + "x": "\n\nThe **`x`** read-only property of the [Accelerometer] interface returns a number specifying the acceleration of the device along its x-axis.", + "y": "\n\nThe **`y`** read-only property of the [Accelerometer] interface returns a number specifying the acceleration of the device along its y-axis.", + "z": "\n\nThe **`z`** read-only property of the [Accelerometer] interface returns a number specifying the acceleration of the device along its z-axis." + } + }, + "aescbcparams": { + "docs": "\n\nThe **`AesCbcParams`** dictionary of the [Web Crypto API](/en-US/docs/Web/API/Web_Crypto_API) represents the object that should be passed as the `algorithm` parameter into [SubtleCrypto.encrypt], [SubtleCrypto.decrypt], [SubtleCrypto.wrapKey], or [SubtleCrypto.unwrapKey], when using the [AES-CBC](/en-US/docs/Web/API/SubtleCrypto/encrypt#aes-cbc) algorithm." + }, + "aesctrparams": { + "docs": "\n\nThe **`AesCtrParams`** dictionary of the [Web Crypto API](/en-US/docs/Web/API/Web_Crypto_API) represents the object that should be passed as the `algorithm` parameter into [SubtleCrypto.encrypt], [SubtleCrypto.decrypt], [SubtleCrypto.wrapKey], or [SubtleCrypto.unwrapKey], when using the [AES-CTR](/en-US/docs/Web/API/SubtleCrypto/encrypt#aes-ctr) algorithm.\n\nAES is a block cipher, meaning that it splits the message into blocks and encrypts it a block at a time. In CTR mode, every time a block of the message is encrypted, an extra block of data is mixed in. This extra block is called the \"counter block\".\n\nA given counter block value must never be used more than once with the same key:\n\n- Given a message _n_ blocks long, a different counter block must be used for every block.\n- If the same key is used to encrypt more than one message, a different counter block must be used for all blocks across all messages.\n\nTypically this is achieved by splitting the initial counter block value into two concatenated parts:\n\n- A [nonce](https://en.wikipedia.org/wiki/Cryptographic_nonce) (that is, a number that may only be used once). 
The nonce part of the block stays the same for every block in the message. Each time a new message is to be encrypted, a new nonce is chosen. Nonces don't have to be secret, but they must not be reused with the same key.\n- A counter. This part of the block gets incremented each time a block is encrypted.\n\nEssentially: the nonce should ensure that counter blocks are not reused from one message to the next, while the counter should ensure that counter blocks are not reused within a single message.\n\n> **Note:** See [Appendix B of the NIST SP800-38A standard](https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-38a.pdf#%5B%7B%22num%22%3A70%2C%22gen%22%3A0%7D%2C%7B%22name%22%3A%22Fit%22%7D%5D) for more information." + }, + "aesgcmparams": { + "docs": "\n\nThe **`AesGcmParams`** dictionary of the [Web Crypto API](/en-US/docs/Web/API/Web_Crypto_API) represents the object that should be passed as the `algorithm` parameter into [SubtleCrypto.encrypt], [SubtleCrypto.decrypt], [SubtleCrypto.wrapKey], or [SubtleCrypto.unwrapKey], when using the [AES-GCM](/en-US/docs/Web/API/SubtleCrypto/encrypt#aes-gcm) algorithm.\n\nFor details of how to supply appropriate values for this parameter, see the specification for AES-GCM: [NIST SP800-38D](https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-38d.pdf), in particular section 5.2.1.1 on Input Data." + }, + "aeskeygenparams": { + "docs": "\n\nThe **`AesKeyGenParams`** dictionary of the [Web Crypto API](/en-US/docs/Web/API/Web_Crypto_API) represents the object that should be passed as the `algorithm` parameter into [SubtleCrypto.generateKey], when generating an AES key: that is, when the algorithm is identified as any of [AES-CBC](/en-US/docs/Web/API/SubtleCrypto/encrypt#aes-cbc), [AES-CTR](/en-US/docs/Web/API/SubtleCrypto/encrypt#aes-ctr), [AES-GCM](/en-US/docs/Web/API/SubtleCrypto/encrypt#aes-gcm), or [AES-KW](/en-US/docs/Web/API/SubtleCrypto/wrapKey#aes-kw)." + }, + "ambientlightsensor": { + "docs": "\n\nThe **`AmbientLightSensor`** interface of the [Sensor APIs](/en-US/docs/Web/API/Sensor_APIs) returns the current light level or illuminance of the ambient light around the hosting device.\n\nTo use this sensor, the user must grant permission to the `'ambient-light-sensor'` device sensor through the [Permissions API](/en-US/docs/Web/API/Permissions_API).\n\nThis feature may be blocked by a [Permissions Policy](/en-US/docs/Web/HTTP/Permissions_Policy) set on your server.\n\n", + "properties": { + "illuminance": "\n\nThe **`illuminance`** property of the [AmbientLightSensor] interface returns the current light level in [lux](https://en.wikipedia.org/wiki/Lux) of the ambient light level around the hosting device." + } + }, + "analysernode": { + "docs": "\n\nThe **`AnalyserNode`** interface represents a node able to provide real-time frequency and time-domain analysis information. It is an [AudioNode] that passes the audio stream unchanged from the input to the output, but allows you to take the generated data, process it, and create audio visualizations.\n\nAn `AnalyserNode` has exactly one input and one output. The node works even if the output is not connected.\n\n![Without modifying the audio stream, the node allows to get the frequency and time-domain data associated to it, using a FFT.](fttaudiodata_en.svg)\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
Number of inputs: 1
Number of outputs: 1 (but may be left unconnected)
Channel count mode: "max"
Channel count: 2
Channel interpretation: "speakers"
", + "properties": { + "fftsize": "\n\nThe **`fftSize`** property of the [AnalyserNode] interface is an unsigned long value and represents the window size in samples that is used when performing a [Fast Fourier Transform](https://en.wikipedia.org/wiki/Fast_Fourier_transform) (FFT) to get frequency domain data.", + "frequencybincount": "\n\nThe **`frequencyBinCount`** read-only property of the [AnalyserNode] interface contains the total number of data points available to [AudioContext] [BaseAudioContext.sampleRate]. This is half of the `value` of the [AnalyserNode.fftSize]. The two methods' indices have a linear relationship with the frequencies they represent, between 0 and the [Nyquist frequency](https://en.wikipedia.org/wiki/Nyquist_frequency).", + "getbytefrequencydata": "\n\nThe **`getByteFrequencyData()`** method of the [AnalyserNode] interface copies the current frequency data into a `Uint8Array` (unsigned byte array) passed into it.\n\nThe frequency data is composed of integers on a scale from 0 to 255.\n\nEach item in the array represents the decibel value for a specific frequency. The frequencies are spread linearly from 0 to 1/2 of the sample rate. For example, for `48000` sample rate, the last item of the array will represent the decibel value for `24000` Hz.\n\nIf the array has fewer elements than the [AnalyserNode.frequencyBinCount], excess elements are dropped. If it has more elements than needed, excess elements are ignored.", + "getbytetimedomaindata": "\n\nThe **`getByteTimeDomainData()`** method of the [AnalyserNode] Interface copies the current waveform, or time-domain, data into a `Uint8Array` (unsigned byte array) passed into it.\n\nIf the array has fewer elements than the [AnalyserNode.fftSize], excess elements are dropped. If it has more elements than needed, excess elements are ignored.", + "getfloatfrequencydata": "\n\nThe **`getFloatFrequencyData()`** method of the [AnalyserNode] Interface copies the current frequency data into a `Float32Array` array passed into it. Each array value is a _sample_, the magnitude of the signal at a particular time.\n\nEach item in the array represents the decibel value for a specific frequency. The frequencies are spread linearly from 0 to 1/2 of the sample rate. For example, for a `48000` Hz sample rate, the last item of the array will represent the decibel value for `24000` Hz.\n\nIf you need higher performance and don't care about precision, you can use [AnalyserNode.getByteFrequencyData] instead, which works on a `Uint8Array`.", + "getfloattimedomaindata": "\n\nThe **`getFloatTimeDomainData()`** method of the [AnalyserNode] Interface copies the current waveform, or time-domain, data into a `Float32Array` array passed into it. 
Each array value is a _sample_, the magnitude of the signal at a particular time.", + "maxdecibels": "\n\nThe **`maxDecibels`** property of the [AnalyserNode] interface is a double value representing the maximum power value in the scaling range for the FFT analysis data, for conversion to unsigned byte values — basically, this specifies the maximum value for the range of results when using `getByteFrequencyData()`.", + "mindecibels": "\n\nThe **`minDecibels`** property of the [AnalyserNode] interface is a double value representing the minimum power value in the scaling range for the FFT analysis data, for conversion to unsigned byte values — basically, this specifies the minimum value for the range of results when using `getByteFrequencyData()`.", + "smoothingtimeconstant": "\n\nThe **`smoothingTimeConstant`** property of the [AnalyserNode] interface is a double value representing the averaging constant with the last analysis frame. It's basically an average between the current buffer and the last buffer the `AnalyserNode` processed, and results in a much smoother set of value changes over time." + } + }, + "angle_instanced_arrays": { + "docs": "\n\nThe **`ANGLE_instanced_arrays`** extension is part of the [WebGL API](/en-US/docs/Web/API/WebGL_API) and allows to draw the same object, or groups of similar objects multiple times, if they share the same vertex data, primitive count and type.\n\nWebGL extensions are available using the [WebGLRenderingContext.getExtension] method. For more information, see also [Using Extensions](/en-US/docs/Web/API/WebGL_API/Using_Extensions) in the [WebGL tutorial](/en-US/docs/Web/API/WebGL_API/Tutorial).\n\n> **Note:** This extension is only available to [WebGLRenderingContext] contexts. In [WebGL2RenderingContext], the functionality of this extension is available on the WebGL2 context by default and the constants and methods are available without the \"`ANGLE`\" suffix.\n>\n> Despite the name \"ANGLE\", this extension works on any device if the hardware supports it and not just on Windows when using the ANGLE library. \"ANGLE\" just indicates that this extension has been written by the ANGLE library authors.", + "properties": { + "drawarraysinstancedangle": "\n\nThe **`ANGLE_instanced_arrays.drawArraysInstancedANGLE()`** method of the [WebGL API](/en-US/docs/Web/API/WebGL_API) renders primitives from array data like the [WebGLRenderingContext.drawArrays] method. In addition, it can execute multiple instances of the range of elements.\n\n> **Note:** When using [WebGL2RenderingContext], this method is available as [WebGL2RenderingContext.drawArraysInstanced] by default.", + "drawelementsinstancedangle": "\n\nThe **`ANGLE_instanced_arrays.drawElementsInstancedANGLE()`** method of the [WebGL API](/en-US/docs/Web/API/WebGL_API) renders primitives from array data like the [WebGLRenderingContext.drawElements] method. 
In addition, it can execute multiple instances of a set of elements.\n\n> **Note:** When using [WebGL2RenderingContext], this method is available as [WebGL2RenderingContext.drawElementsInstanced] by default.", + "vertexattribdivisorangle": "\n\nThe **ANGLE_instanced_arrays.vertexAttribDivisorANGLE()** method of the [WebGL API](/en-US/docs/Web/API/WebGL_API) modifies the rate at which generic vertex attributes advance when rendering multiple instances of primitives with [ANGLE_instanced_arrays.drawArraysInstancedANGLE] and [ANGLE_instanced_arrays.drawElementsInstancedANGLE].\n\n> **Note:** When using [WebGL2RenderingContext], this method is available as [WebGL2RenderingContext.vertexAttribDivisor] by default." + } + }, + "animation": { + "docs": "\n\nThe **`Animation`** interface of the [Web Animations API](/en-US/docs/Web/API/Web_Animations_API) represents a single animation player and provides playback controls and a timeline for an animation node or source.\n\n", + "properties": { + "cancel": "\n\nThe Web Animations API's **`cancel()`** method of the [Animation] interface clears all [KeyframeEffect]s caused by this animation and aborts its playback.\n\n> **Note:** When an animation is cancelled, its [Animation.startTime] and [Animation.currentTime] are set to `null`.", + "cancel_event": "\n\nThe **`cancel`** event of the [Animation] interface is fired when the [Animation.cancel] method is called or when the animation enters the `\"idle\"` play state from another state, such as when the animation is removed from an element before it finishes playing.\n\n> **Note:** Creating a new animation that is initially idle does not trigger a `cancel` event on the new animation.", + "commitstyles": "\n\nThe `commitStyles()` method of the [Web Animations API](/en-US/docs/Web/API/Web_Animations_API)'s [Animation] interface writes the [computed values](/en-US/docs/Web/CSS/computed_value) of the animation's current styles into its target element's [`style`](/en-US/docs/Web/HTML/Global_attributes#style) attribute. `commitStyles()` works even if the animation has been [automatically removed](/en-US/docs/Web/API/Web_Animations_API/Using_the_Web_Animations_API#automatically_removing_filling_animations).\n\n`commitStyles()` can be used in combination with `fill` to cause the final state of an animation to persist after the animation ends. The same effect could be achieved with `fill` alone, but [using indefinitely filling animations is discouraged](https://drafts.csswg.org/web-animations-1/#fill-behavior). Animations [take precedence over all static styles](/en-US/docs/Web/CSS/Cascade#cascading_order), so an indefinite filling animation can prevent the target element from ever being styled normally.\n\nUsing `commitStyles()` writes the styling state into the element's [`style`](/en-US/docs/Web/HTML/Global_attributes#style) attribute, where they can be modified and replaced as normal.", + "currenttime": "\n\nThe **`Animation.currentTime`** property of the [Web Animations API](/en-US/docs/Web/API/Web_Animations_API) returns and sets the current time value of the animation in milliseconds, whether running or paused.\n\nIf the animation lacks a [AnimationTimeline], is inactive, or hasn't been played yet, `currentTime`'s return value is `null`.", + "effect": "\n\nThe **`Animation.effect`** property of the [Web Animations API](/en-US/docs/Web/API/Web_Animations_API) gets and sets the target effect of an animation. 
The target effect may be either an effect object of a type based on [AnimationEffect], such as [KeyframeEffect], or `null`.", + "finish": "\n\nThe **`finish()`** method of the [Web Animations API](/en-US/docs/Web/API/Web_Animations_API)'s [Animation] Interface sets the current playback time to the end of the animation corresponding to the current playback direction.\n\nThat is, if the animation is playing forward, it sets the playback time to the length of the animation sequence, and if the animation is playing in reverse (having had its [Animation.reverse] method called), it sets the playback time to 0.", + "finish_event": "\n\nThe **`finish`** event of the [Animation] interface is fired when the animation finishes playing, either when the animation completes naturally, or\nwhen the [Animation.finish] method is called to immediately cause the\nanimation to finish up.\n\n> **Note:** The `\"paused\"` play state supersedes the `\"finished\"` play\n> state; if the animation is both paused and finished, the `\"paused\"` state\n> is the one that will be reported. You can force the animation into the\n> `\"finished\"` state by setting its [Animation.startTime] to\n> `document.timeline.currentTime - (Animation.currentTime * Animation.playbackRate)`.", + "finished": "\n\nThe **`Animation.finished`** read-only property of the [Web Animations API](/en-US/docs/Web/API/Web_Animations_API) returns a `Promise` which resolves once the animation has finished playing.\n\n> **Note:** Every time the animation leaves the `finished` play state (that is, when it starts playing again), a new `Promise` is created for this property. The new `Promise` will resolve once the new animation sequence has completed.", + "id": "\n\nThe **`Animation.id`** property of the [Web Animations API](/en-US/docs/Web/API/Web_Animations_API) returns or sets a string used to identify the animation.", + "pause": "\n\nThe **`pause()`** method of the [Web Animations API](/en-US/docs/Web/API/Web_Animations_API)'s [Animation] interface suspends playback of the animation.", + "pending": "\n\nThe read-only **`Animation.pending`** property of the [Web Animations API](/en-US/docs/Web/API/Web_Animations_API) indicates whether the animation is currently waiting for an asynchronous operation such as initiating playback or pausing a running animation.", + "persist": "\n\nThe `persist()` method of the [Web Animations API](/en-US/docs/Web/API/Web_Animations_API)'s [Animation] interface explicitly persists an animation, preventing it from being [automatically removed](/en-US/docs/Web/API/Web_Animations_API/Using_the_Web_Animations_API#automatically_removing_filling_animations) when it is replaced by another animation.", + "play": "\n\nThe **`play()`** method of the [Web Animations API](/en-US/docs/Web/API/Web_Animations_API)'s [Animation] Interface starts or resumes playing of an animation. If the animation is finished, calling `play()` restarts the animation, playing it from the beginning.", + "playbackrate": "\n\nThe **`Animation.playbackRate`** property of the [Web Animations API](/en-US/docs/Web/API/Web_Animations_API) returns or sets the playback rate of the animation.\n\nAnimations have a **playback rate** that provides a scaling factor from the rate of change of the animation's [DocumentTimeline] time values to the animation's current time. 
The playback rate is initially `1`.", + "playstate": "\n\nThe read-only **`Animation.playState`** property of the [Web Animations API](/en-US/docs/Web/API/Web_Animations_API) returns an enumerated value describing the playback state of an animation.", + "ready": "\n\nThe read-only **`Animation.ready`** property of the [Web Animations API](/en-US/docs/Web/API/Web_Animations_API) returns a `Promise` which resolves when the animation is ready to play. A new promise is created every time the animation enters the `\"pending\"` [play state](/en-US/docs/Web/API/Animation/playState) as well as when the animation is canceled, since in both of those scenarios, the animation is ready to be started again.\n\n> **Note:** Since the same `Promise` is used for both pending `play` and pending `pause` requests, authors are advised to check the state of the animation when the promise is resolved.", + "remove_event": "\n\nThe **`remove`** event of the [Animation] interface fires when the animation is [automatically removed](/en-US/docs/Web/API/Web_Animations_API/Using_the_Web_Animations_API#automatically_removing_filling_animations) by the browser.", + "replacestate": "\n\nThe read-only **`Animation.replaceState`** property of the [Web Animations API](/en-US/docs/Web/API/Web_Animations_API) indicates whether the animation has been [removed by the browser automatically](/en-US/docs/Web/API/Web_Animations_API/Using_the_Web_Animations_API#automatically_removing_filling_animations) after being replaced by another animation.", + "reverse": "\n\nThe **`Animation.reverse()`** method of the [Animation] Interface reverses the playback direction, meaning the animation ends at its beginning. If called on an unplayed animation, the whole animation is played backwards. If called on a paused animation, the animation will continue in reverse.", + "starttime": "\n\nThe **`Animation.startTime`** property of the [Animation] interface is a double-precision floating-point value which indicates the scheduled time when an animation's playback should begin.\n\nAn animation's **start time** is the time value of its [DocumentTimeline] when its target [KeyframeEffect] is scheduled to begin playback. An animation's **start time** is initially unresolved (meaning that it's `null` because it has no value).", + "timeline": "\n\nThe **`Animation.timeline`** property of the [Animation] interface returns or sets the [AnimationTimeline] associated with this animation. A timeline is a source of time values for synchronization purposes, and is an [AnimationTimeline]-based object. By default, the animation's timeline and the [Document]'s timeline are the same.", + "updateplaybackrate": "\n\nThe **`updatePlaybackRate()`** method of the [Web Animations API](/en-US/docs/Web/API/Web_Animations_API)'s\n[Animation] Interface sets the speed of an animation after first\nsynchronizing its playback position.\n\nIn some cases, an animation may run on a separate thread or process and will continue\nupdating even while long-running JavaScript delays the main thread. In such a case,\nsetting the [Animation.playbackRate] on the animation\ndirectly may cause the animation's playback position to jump since its playback\nposition on the main thread may have drifted from the playback position where it is\ncurrently running.\n\n`updatePlaybackRate()` is an asynchronous method that sets the speed of an\nanimation after synchronizing with its current playback position, ensuring that the\nresulting change in speed does not produce a sharp jump. 
After calling\n`updatePlaybackRate()` the animation's [Animation.playbackRate] is _not_ immediately updated. It will be updated once the\nanimation's [Animation.ready] promise is resolved." + } + }, + "animationeffect": { + "docs": "\n\nThe `AnimationEffect` interface of the [Web Animations API](/en-US/docs/Web/API/Web_Animations_API) is an interface representing animation effects.\n\n`AnimationEffect` is an abstract interface and so isn't directly instantiable. However, concrete interfaces such as [KeyframeEffect] inherit from it, and instances of these interfaces can be passed to [Animation] objects for playing, and may also be used by [CSS Animations](/en-US/docs/Web/CSS/CSS_animations) and [Transitions](/en-US/docs/Web/CSS/CSS_transitions).", + "properties": { + "getcomputedtiming": "\n\nThe `getComputedTiming()` method of the [AnimationEffect] interface returns the calculated timing properties for this animation effect.\n\n> **Note:** These values are comparable to the computed styles of an Element returned using `window.getComputedStyle(elem)`.", + "gettiming": "\n\nThe `AnimationEffect.getTiming()` method of the [AnimationEffect] interface returns an object containing the timing properties for the Animation Effect.\n\n> **Note:** Several of the timing properties returned by `getTiming()` may take on the placeholder value `\"auto\"`. To obtain resolved values for use in timing computations, instead use [AnimationEffect.getComputedTiming].\n>\n> In the future, `\"auto\"` or similar values might be added to the types of more timing properties, and new types of [AnimationEffect] might resolve `\"auto\"` to different values.", + "updatetiming": "\n\nThe `updateTiming()` method of the [AnimationEffect] interface updates the specified timing properties for an animation effect." + } + }, + "animationevent": { + "docs": "\n\nThe **`AnimationEvent`** interface represents events providing information related to [animations](/en-US/docs/Web/CSS/CSS_animations/Using_CSS_animations).\n\n", + "properties": { + "animationname": "\n\nThe **`AnimationEvent.animationName`** read-only property is a\nstring containing the value of the CSS\nproperty associated with the transition.", + "elapsedtime": "\n\nThe **`AnimationEvent.elapsedTime`** read-only property is a\n`float` giving the amount of time the animation has been running, in seconds,\nwhen this event fired, excluding any time the animation was paused. For an\n[Element/animationstart_event] event,\n`elapsedTime` is `0.0` unless there was a negative value for\n, in which case the event will be fired with\n`elapsedTime` containing `(-1 * delay)`.", + "pseudoelement": "\n\nThe **`AnimationEvent.pseudoElement`** read-only property is a\nstring, starting with `'::'`, containing the name of the [pseudo-element](/en-US/docs/Web/CSS/Pseudo-elements) the animation runs on.\nIf the animation doesn't run on a pseudo-element but on the element, an empty string: `''`." + } + }, + "animationplaybackevent": { + "docs": "\n\nThe AnimationPlaybackEvent interface of the [Web Animations API](/en-US/docs/Web/API/Web_Animations_API) represents animation events.\n\nAs animations play, they report changes to their [Animation.playState] through animation events.\n\n", + "properties": { + "currenttime": "\n\nThe **`currentTime`** read-only property of the [AnimationPlaybackEvent] interface represents the current time of the animation that generated the event at the moment the event is queued. 
This will be unresolved if the animation was `idle` at the time the event was generated.", + "timelinetime": "\n\nThe **`timelineTime`** read-only property of the [AnimationPlaybackEvent] interface represents the time value of the animation's [AnimationTimeline] at the moment the event is queued. This will be unresolved if the animation was not associated with a timeline at the time the event was generated or if the associated timeline was inactive." + } + }, + "animationtimeline": { + "docs": "\n\nThe `AnimationTimeline` interface of the [Web Animations API](/en-US/docs/Web/API/Web_Animations_API) represents the timeline of an animation. This interface exists to define timeline features, inherited by other timeline types:\n\n- [DocumentTimeline]\n- [ScrollTimeline]\n- [ViewTimeline]", + "properties": { + "currenttime": "\n\nThe **`currentTime`** read-only property of the [Web Animations API](/en-US/docs/Web/API/Web_Animations_API)'s [AnimationTimeline] interface returns the timeline's current time in milliseconds, or `null` if the timeline is inactive." + } + }, + "attr": { + "docs": "\n\nThe **`Attr`** interface represents one of an element's attributes as an object. In most situations, you will directly retrieve the attribute value as a string (e.g., [Element.getAttribute]), but certain functions (e.g., [Element.getAttributeNode]) or means of iterating return `Attr` instances.\n\nThe core idea of an object of type `Attr` is the association between a _name_ and a _value_. An attribute may also be part of a _namespace_ and, in this case, it also has a URI identifying the namespace, and a prefix that is an abbreviation for the namespace.\n\nThe name is deemed _local_ when it ignores the eventual namespace prefix and deemed _qualified_ when it includes the prefix of the namespace, if any, separated from the local name by a colon (`:`). We have three cases: an attribute outside of a namespace, an attribute inside a namespace without a prefix defined, an attribute inside a namespace with a prefix:\n\n| Attribute | Namespace name | Namespace prefix | Attribute local name | Attribute qualified name |\n| --------- | -------------- | ---------------- | -------------------- | ------------------------ |\n| `myAttr` | _none_ | _none_ | `myAttr` | `myAttr` |\n| `myAttr` | `mynamespace` | _none_ | `myAttr` | `myAttr` |\n| `myAttr` | `mynamespace` | `myns` | `myAttr` | `myns:myAttr` |\n\n> **Note:** This interface represents only attributes present in the tree representation of the [Element], being a SVG, an HTML or a MathML element. It doesn't represent the _property_ of an interface associated with such element, such as [HTMLTableElement] for a `table` element. (See for more information about attributes and how they are _reflected_ into properties.)", + "properties": { + "localname": "\n\nThe read-only **`localName`** property of the [Attr] interface returns the _local part_ of the _qualified name_ of an attribute, that is the name of the attribute, stripped from any namespace in front of it. For example, if the qualified name is `xml:lang`, the returned local name is `lang`, if the element supports that namespace.\n\nThe local name is always in lower case, whatever case at the attribute creation.\n\n> **Note:** HTML only supports a fixed set of namespaces on SVG and MathML elements. 
These are `xml` (for the `xml:lang` attribute), `xlink` (for the `xlink:href`, `xlink:show`, `xlink:target` and `xlink:title` attributes) and `xpath`.\n>\n> That means that the local name of an attribute of an HTML element is always be equal to its qualified name: Colons are treated as regular characters. In XML, like in SVG or MathML, the colon denotes the end of the prefix and what is before is the namespace; the local name may be different from the qualified name.", + "name": "\n\nThe read-only **`name`** property of the [Attr] interface returns the _qualified name_ of an attribute, that is the name of the attribute, with the namespace prefix, if any, in front of it. For example, if the local name is `lang` and the namespace prefix is `xml`, the returned qualified name is `xml:lang`.\n\nThe qualified name is always in lower case, whatever case at the attribute creation.", + "namespaceuri": "\n\nThe read-only **`namespaceURI`** property of the [Attr] interface returns the namespace URI of the attribute,\nor `null` if the element is not in a namespace.\n\nThe namespace URI is set at the [Attr] creation and cannot be changed.\nAn attribute with a namespace can be created using [Element.setAttributeNS].\n\n> **Note:** an attribute does not inherit its namespace from the element it is attached to.\n> If an attribute is not explicitly given a namespace, it has no namespace.\n\nThe browser does not handle or enforce namespace validation per se. It is up to the JavaScript\napplication to do any necessary validation. Note, too, that the namespace prefix, once it\nis associated with a particular attribute node, cannot be changed.", + "ownerelement": "\n\nThe read-only **`ownerElement`** property of the [Attr] interface returns the [Element] the attribute belongs to.", + "prefix": "\n\nThe read-only **`prefix`** property of the [Attr] returns the namespace prefix of the attribute, or `null` if no prefix is specified.\n\nThe prefix is always in lower case, whatever case is used at the attribute creation.\n\n> **Note:** Only XML supports namespaces. HTML does not. That means that the prefix of an attribute of an HTML element will always be `null`.\n\nAlso, only the `xml` (for the `xml:lang` attribute), `xlink` (for the `xlink:href`, `xlink:show`, `xlink:target` and `xlink:title` attributes) and `xpath` namespaces are supported, and only on SVG and MathML elements.", + "specified": "\n\nThe read-only **`specified`** property of the [Attr] interface always returns `true`.", + "value": "\n\nThe **`value`** property of the [Attr] interface contains the value of the attribute." + } + }, + "audiobuffer": { + "docs": "\n\nThe **`AudioBuffer`** interface represents a short audio asset residing in memory, created from an audio file using the [BaseAudioContext/decodeAudioData] method, or from raw data using [BaseAudioContext/createBuffer]. Once put into an AudioBuffer, the audio can then be played by being passed into an [AudioBufferSourceNode].\n\nObjects of these types are designed to hold small audio snippets, typically less than 45 s. For longer sounds, objects implementing the [MediaElementAudioSourceNode] are more suitable. The buffer contains the audio signal waveform encoded as a series of amplitudes in the following format: non-interleaved IEEE754 32-bit linear PCM with a nominal range between `-1` and `+1`, that is, a 32-bit floating point buffer, with each sample between -1.0 and 1.0. 
If the [AudioBuffer] has multiple channels, they are stored in separate buffers.", + "properties": { + "copyfromchannel": "\n\nThe\n**`copyFromChannel()`** method of the\n[AudioBuffer] interface copies the audio sample data from the specified\nchannel of the `AudioBuffer` to a specified\n`Float32Array`.", + "copytochannel": "\n\nThe `copyToChannel()` method of the [AudioBuffer] interface copies\nthe samples to the specified channel of the `AudioBuffer`, from the source array.", + "duration": "\n\nThe **`duration`** property of the [AudioBuffer] interface returns a double representing the duration, in seconds, of the PCM data\nstored in the buffer.", + "getchanneldata": "\n\nThe **`getChannelData()`** method of the [AudioBuffer] Interface returns a `Float32Array` containing the PCM data associated with the channel, defined by the channel parameter (with 0 representing the first channel).", + "length": "\n\nThe **`length`** property of the [AudioBuffer]\ninterface returns an integer representing the length, in sample-frames, of the PCM data\nstored in the buffer.", + "numberofchannels": "\n\nThe `numberOfChannels` property of the [AudioBuffer]\ninterface returns an integer representing the number of discrete audio channels\ndescribed by the PCM data stored in the buffer.", + "samplerate": "\n\nThe **`sampleRate`** property of the [AudioBuffer] interface returns a float representing the sample rate, in\nsamples per second, of the PCM data stored in the buffer." + } + }, + "audiobuffersourcenode": { + "docs": "\n\nThe **`AudioBufferSourceNode`** interface is an [AudioScheduledSourceNode] which represents an audio source consisting of in-memory audio data, stored in an [AudioBuffer].\n\nThis interface is especially useful for playing back audio which has particularly stringent timing accuracy requirements, such as for sounds that must match a specific rhythm and can be kept in memory rather than being played from disk or the network. To play sounds which require accurate timing but must be streamed from the network or played from disk, use a [AudioWorkletNode] to implement its playback.\n\nAn `AudioBufferSourceNode` has no inputs and exactly one output, which has the same number of channels as the `AudioBuffer` indicated by its [AudioBufferSourceNode.buffer] property. If there's no buffer set—that is, if `buffer` is `null`—the output contains a single channel of silence (every sample is 0).\n\nAn `AudioBufferSourceNode` can only be played once; after each call to [AudioBufferSourceNode.start], you have to create a new node if you want to play the same sound again. Fortunately, these nodes are very inexpensive to create, and the actual `AudioBuffer`s can be reused for multiple plays of the sound. Indeed, you can use these nodes in a \"fire and forget\" manner: create the node, call `start()` to begin playing the sound, and don't even bother to hold a reference to it. It will automatically be garbage-collected at an appropriate time, which won't be until sometime after the sound has finished playing.\n\nMultiple calls to [AudioScheduledSourceNode/stop] are allowed. The most recent call replaces the previous one, if the `AudioBufferSourceNode` has not already reached the end of the buffer.\n\n![The AudioBufferSourceNode takes the content of an AudioBuffer and m](webaudioaudiobuffersourcenode.png)\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
Number of inputs: 0
Number of outputs: 1
Channel count: defined by the associated [AudioBuffer]
", + "properties": { + "buffer": "\n\nThe **`buffer`** property of the [AudioBufferSourceNode] interface provides the ability to play back audio\nusing an [AudioBuffer] as the source of the sound data.\n\nIf the `buffer` property is set to the value `null`, the node\ngenerates a single channel containing silence (that is, every sample is 0).", + "detune": "\n\nThe **`detune`** property of the\n[AudioBufferSourceNode] interface is a [k-rate](/en-US/docs/Web/API/AudioParam#k-rate) [AudioParam]\nrepresenting detuning of oscillation in [cents](https://en.wikipedia.org/wiki/Cent_%28music%29).\n\nFor example, values of +100 and -100 detune the source up or down by one semitone,\nwhile +1200 and -1200 detune it up or down by one octave.", + "loop": "\n\nThe `loop` property of the [AudioBufferSourceNode]\ninterface is a Boolean indicating if the audio asset must be replayed when the end of\nthe [AudioBuffer] is reached.\n\nThe `loop` property's default value is `false`.", + "loopend": "\n\nThe `loopEnd` property of the [AudioBufferSourceNode]\ninterface specifies is a floating point number specifying, in seconds, at what offset\ninto playing the [AudioBuffer] playback should loop back to the time\nindicated by the [AudioBufferSourceNode.loopStart] property.\nThis is only used if the [AudioBufferSourceNode.loop] property is\n`true`.", + "loopstart": "\n\nThe **`loopStart`** property of the [AudioBufferSourceNode] interface is a floating-point value indicating, in\nseconds, where in the [AudioBuffer] the restart of the play must happen.\n\nThe `loopStart` property's default value is `0`.", + "playbackrate": "\n\nThe **`playbackRate`** property of\nthe [AudioBufferSourceNode] interface Is a [k-rate](/en-US/docs/Web/API/AudioParam#k-rate) [AudioParam] that\ndefines the speed at which the audio asset will be played.\n\nA value of 1.0 indicates it should play at the same speed as its sampling rate,\nvalues less than 1.0 cause the sound to play more slowly, while values greater than\n1.0 result in audio playing faster than normal. The default value is `1.0`.\nWhen set to another value, the `AudioBufferSourceNode` resamples the audio\nbefore sending it to the output.", + "start": "\n\nThe `start()` method of the [AudioBufferSourceNode]\nInterface is used to schedule playback of the audio data contained in the buffer, or\nto begin playback immediately." + } + }, + "audiocontext": { + "docs": "\n\nThe `AudioContext` interface represents an audio-processing graph built from audio modules linked together, each represented by an [AudioNode].\n\nAn audio context controls both the creation of the nodes it contains and the execution of the audio processing, or decoding. You need to create an `AudioContext` before you do anything else, as everything happens inside a context. It's recommended to create one AudioContext and reuse it instead of initializing a new one each time, and it's OK to use a single `AudioContext` for several different audio sources and pipeline concurrently.\n\n", + "properties": { + "baselatency": "\n\nThe **`baseLatency`** read-only property of the\n[AudioContext] interface returns a double that represents the number of\nseconds of processing latency incurred by the `AudioContext` passing an audio\nbuffer from the [AudioDestinationNode] — i.e. 
the end of the audio graph —\ninto the host system's audio subsystem ready for playing.\n\n> **Note:** You can request a certain latency during\n> [AudioContext.AudioContext] with the\n> `latencyHint` option, but the browser may ignore the option.", + "close": "\n\nThe `close()` method of the [AudioContext] Interface closes the audio context, releasing any system audio resources that it uses.\n\nThis function does not automatically release all `AudioContext`-created objects, unless other references have been released as well; however, it will forcibly release any system audio resources that might prevent additional `AudioContexts` from being created and used, suspend the progression of audio time in the audio context, and stop processing audio data. The returned `Promise` resolves when all `AudioContext`-creation-blocking resources have been released. This method throws an `INVALID_STATE_ERR` exception if called on an [OfflineAudioContext].", + "createmediaelementsource": "\n\nThe `createMediaElementSource()` method of the [AudioContext] Interface is used to create a new [MediaElementAudioSourceNode] object, given an existing HTML `audio` or `video` element, the audio from which can then be played and manipulated.\n\nFor more details about media element audio source nodes, check out the [MediaElementAudioSourceNode] reference page.", + "createmediastreamdestination": "\n\nThe `createMediaStreamDestination()` method of the [AudioContext] Interface is used to create a new [MediaStreamAudioDestinationNode] object associated with a [WebRTC](/en-US/docs/Web/API/WebRTC_API) [MediaStream] representing an audio stream, which may be stored in a local file or sent to another computer.\n\nThe [MediaStream] is created when the node is created and is accessible via the [MediaStreamAudioDestinationNode]'s `stream` attribute. 
This stream can be used in a similar way as a `MediaStream` obtained via [navigator.getUserMedia] — it can, for example, be sent to a remote peer using the `addStream()` method of `RTCPeerConnection`.\n\nFor more details about media stream destination nodes, check out the [MediaStreamAudioDestinationNode] reference page.", + "createmediastreamsource": "\n\nThe `createMediaStreamSource()` method of the [AudioContext]\nInterface is used to create a new [MediaStreamAudioSourceNode]\nobject, given a media stream (say, from a [MediaDevices.getUserMedia]\ninstance), the audio from which can then be played and manipulated.\n\nFor more details about media stream audio source nodes, check out the [MediaStreamAudioSourceNode] reference page.", + "createmediastreamtracksource": "\n\nThe **`createMediaStreamTrackSource()`** method of the [AudioContext] interface creates and returns a\n[MediaStreamTrackAudioSourceNode] which represents an audio source whose\ndata comes from the specified [MediaStreamTrack].\n\nThis differs from [AudioContext.createMediaStreamSource], which creates a\n[MediaStreamAudioSourceNode] whose audio comes from the audio track in a\nspecified [MediaStream] whose [MediaStreamTrack.id] is\nfirst, lexicographically (alphabetically).", + "getoutputtimestamp": "\n\nThe\n**`getOutputTimestamp()`** method of the\n[AudioContext] interface returns a new `AudioTimestamp` object\ncontaining two audio timestamp values relating to the current audio context.\n\nThe two values are as follows:\n\n- `AudioTimestamp.contextTime`: The time of the sample frame currently\n being rendered by the audio output device (i.e., output audio stream position), in the\n same units and origin as the context's [BaseAudioContext/currentTime].\n Basically, this is the time after the audio context was first created.\n- `AudioTimestamp.performanceTime`: An estimation of the moment when the\n sample frame corresponding to the stored `contextTime` value was rendered\n by the audio output device, in the same units and origin as\n [performance.now]. This is the time after the document containing the\n audio context was first rendered.", + "outputlatency": "\n\nThe **`outputLatency`** read-only property of\nthe [AudioContext] Interface provides an estimation of the output latency\nof the current audio context.\n\nThis is the time, in seconds, between the browser passing an audio buffer out of an\naudio graph over to the host system's audio subsystem to play, and the time at which the\nfirst sample in the buffer is actually processed by the audio output device.\n\nIt varies depending on the platform and the available hardware.", + "resume": "\n\nThe **`resume()`** method of the [AudioContext]\ninterface resumes the progression of time in an audio context that has previously been\nsuspended.\n\nThis method will cause an `INVALID_STATE_ERR` exception to be thrown if\ncalled on an [OfflineAudioContext].", + "setsinkid": "\n\nThe **`setSinkId()`** method of the [AudioContext] interface sets the output audio device for the `AudioContext`. If a sink ID is not explicitly set, the default system audio output device will be used.\n\nTo set the audio device to a device different than the default one, the developer needs permission to access to audio devices. 
If required, the user can be prompted to grant the required permission via a [MediaDevices.getUserMedia] call.\n\nIn addition, this feature may be blocked by a [`speaker-selection`](/en-US/docs/Web/HTTP/Headers/Permissions-Policy/speaker-selection) [Permissions Policy](/en-US/docs/Web/HTTP/Permissions_Policy).", + "sinkchange_event": "\n\nThe **`sinkchange`** event of the [AudioContext] interface is fired when the output audio device (and therefore, the [AudioContext.sinkId]) has changed.", + "sinkid": "\n\nThe **`sinkId`** read-only property of the\n[AudioContext] interface returns the sink ID of the current output audio device.", + "suspend": "\n\nThe `suspend()` method of the [AudioContext] Interface suspends the progression of time in the audio context, temporarily halting audio hardware access and reducing CPU/battery usage in the process — this is useful if you want an application to power down the audio hardware when it will not be using an audio context for a while.\n\nThis method will cause an `INVALID_STATE_ERR` exception to be thrown if called on an [OfflineAudioContext]." + } + }, + "audiodata": { + "docs": "\n\nThe **`AudioData`** interface of the [WebCodecs API](/en-US/docs/Web/API/WebCodecs_API) represents an audio sample.\n\n`AudioData` is a [transferable object](/en-US/docs/Web/API/Web_Workers_API/Transferable_objects).", + "properties": { + "allocationsize": "\n\nThe **`allocationSize()`** method of the [AudioData] interface returns the size in bytes required to hold the current sample as filtered by options passed into the method.", + "clone": "\n\nThe **`clone()`** method of the [AudioData] interface creates a new `AudioData` object with reference to the same media resource as the original.", + "close": "\n\nThe **`close()`** method of the [AudioData] interface clears all states and releases the reference to the media resource.", + "copyto": "\n\nThe **`copyTo()`** method of the [AudioData] interface copies a plane of an `AudioData` object to a destination buffer.", + "duration": "\n\nThe **`duration`** read-only property of the [AudioData] interface returns the duration in microseconds of this `AudioData` object.", + "format": "\n\nThe **`format`** read-only property of the [AudioData] interface returns the sample format of the `AudioData` object.", + "numberofchannels": "\n\nThe **`numberOfChannels`** read-only property of the [AudioData] interface returns the number of channels in the `AudioData` object.", + "numberofframes": "\n\nThe **`numberOfFrames`** read-only property of the [AudioData] interface returns the number of frames in the `AudioData` object.", + "samplerate": "\n\nThe **`sampleRate`** read-only property of the [AudioData] interface returns the sample rate in Hz.", + "timestamp": "\n\nThe **`duration`** read-only property of the [AudioData] interface returns the timestamp of this `AudioData` object." 
+ } + }, + "audiodecoder": { + "docs": "\n\nThe **`AudioDecoder`** interface of the [WebCodecs API] decodes chunks of audio.\n\n", + "properties": { + "close": "\n\nThe **`close()`** method of the [AudioDecoder] interface ends all pending work and releases system resources.", + "configure": "\n\nThe **`configure()`** method of the [AudioDecoder] interface enqueues a control message to configure the audio decoder for decoding chunks.", + "decode": "\n\nThe **`decode()`** method of the [AudioDecoder] interface enqueues a control message to decode a given chunk of audio.", + "decodequeuesize": "\n\nThe **`decodeQueueSize`** read-only property of the [AudioDecoder] interface returns the number of pending decode requests in the queue.", + "dequeue_event": "\n\nThe **`dequeue`** event of the [AudioDecoder] interface fires to signal a decrease in [AudioDecoder.decodeQueueSize].\n\nThis eliminates the need for developers to use a [setTimeout] poll to determine when the queue has decreased, and more work should be queued up.", + "flush": "\n\nThe **`flush()`** method of the [AudioDecoder] interface returns a Promise that resolves once all pending messages in the queue have been completed.", + "isconfigsupported_static": "\n\nThe **`isConfigSupported()`** static method of the [AudioDecoder] interface checks if the given config is supported (that is, if [AudioDecoder] objects can be successfully configured with the given config).", + "reset": "\n\nThe **`reset()`** method of the [AudioDecoder] interface resets all states including configuration, control messages in the control message queue, and all pending callbacks.", + "state": "\n\nThe **`state`** read-only property of the [AudioDecoder] interface returns the current state of the underlying codec." + } + }, + "audiodestinationnode": { + "docs": "\n\nThe `AudioDestinationNode` interface represents the end destination of an audio graph in a given context — usually the speakers of your device. It can also be the node that will \"record\" the audio data when used with an `OfflineAudioContext`.\n\n`AudioDestinationNode` has no output (as it _is_ the output, no more `AudioNode` can be linked after it in the audio graph) and one input. The number of channels in the input must be between `0` and the `maxChannelCount` value or an exception is raised.\n\nThe `AudioDestinationNode` of a given `AudioContext` can be retrieved using the [BaseAudioContext/destination] property.\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
Number of inputs: 1
Number of outputs: 0
Channel count mode: "explicit"
Channel count: 2
Channel interpretation: "speakers"
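As a brief illustration (assumed usage, not part of the scraped text), a source is typically routed into the context's `AudioDestinationNode` via [AudioNode.connect]:

```js
// Illustrative sketch: route an oscillator through a gain node to the speakers.
const audioCtx = new AudioContext();
const oscillator = audioCtx.createOscillator();
const gainNode = audioCtx.createGain();
oscillator.connect(gainNode);
gainNode.connect(audioCtx.destination); // audioCtx.destination is the AudioDestinationNode
oscillator.start();
```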
", + "properties": { + "maxchannelcount": "\n\nThe `maxchannelCount` property of the [AudioDestinationNode] interface is an `unsigned long` defining the maximum amount of channels that the physical device can handle.\n\nThe [AudioNode.channelCount] property can be set between 0 and this value (both included). If `maxChannelCount` is `0`, like in [OfflineAudioContext], the channel count cannot be changed." + } + }, + "audioencoder": { + "docs": "\n\nThe **`AudioEncoder`** interface of the [WebCodecs API](/en-US/docs/Web/API/WebCodecs_API) encodes [AudioData] objects.\n\n", + "properties": { + "close": "\n\nThe **`close()`** method of the [AudioEncoder] interface ends all pending work and releases system resources.", + "configure": "\n\nThe **`configure()`** method of the [AudioEncoder] interface enqueues a control message to configure the audio encoder for encoding chunks.", + "dequeue_event": "\n\nThe **`dequeue`** event of the [AudioEncoder] interface fires to signal a decrease in [AudioEncoder.encodeQueueSize].\n\nThis eliminates the need for developers to use a [setTimeout] poll to determine when the queue has decreased, and more work should be queued up.", + "encode": "\n\nThe **`encode()`** method of the [AudioEncoder] interface enqueues a control message to encode a given [AudioData] object.", + "encodequeuesize": "\n\nThe **`encodeQueueSize`** read-only property of the [AudioEncoder] interface returns the number of pending encode requests in the queue.", + "flush": "\n\nThe **`flush()`** method of the [AudioEncoder] interface returns a Promise that resolves once all pending messages in the queue have been completed.", + "isconfigsupported_static": "\n\nThe **`isConfigSupported()`** static method of the [AudioEncoder] interface checks if the given config is supported (that is, if [AudioEncoder] objects can be successfully configured with the given config).", + "reset": "\n\nThe **`reset()`** method of the [AudioEncoder] interface resets all states including configuration, control messages in the control message queue, and all pending callbacks.", + "state": "\n\nThe **`state`** read-only property of the [AudioEncoder] interface returns the current state of the underlying codec." + } + }, + "audiolistener": { + "docs": "\n\nThe `AudioListener` interface represents the position and orientation of the unique person listening to the audio scene, and is used in [audio spatialization](/en-US/docs/Web/API/Web_Audio_API/Web_audio_spatialization_basics). 
All [PannerNode]s spatialize in relation to the `AudioListener` stored in the [BaseAudioContext.listener] attribute.\n\nIt is important to note that there is only one listener per context and that it isn't an [AudioNode].\n\n![We see the position, up and front vectors of an AudioListener, with the up and front vectors at 90° from the other.](webaudiolistenerreduced.png)", + "properties": { + "forwardx": "\n\nThe `forwardX` read-only property of the [AudioListener] interface is an [AudioParam] representing the x value of the direction vector defining the forward direction the listener is pointing in.\n\n> **Note:** The parameter is _a-rate_ when used with a [PannerNode] whose [PannerNode.panningModel] is set to equalpower, or _k-rate_ otherwise.", + "forwardy": "\n\nThe `forwardY` read-only property of the [AudioListener] interface is an [AudioParam] representing the y value of the direction vector defining the forward direction the listener is pointing in.\n\n> **Note:** The parameter is _a-rate_ when used with a [PannerNode] whose [PannerNode.panningModel] is set to equalpower, or _k-rate_ otherwise.", + "forwardz": "\n\nThe `forwardZ` read-only property of the [AudioListener] interface is an [AudioParam] representing the z value of the direction vector defining the forward direction the listener is pointing in.\n\n> **Note:** The parameter is _a-rate_ when used with a [PannerNode] whose [PannerNode.panningModel] is set to equalpower, or _k-rate_ otherwise.", + "positionx": "\n\nThe `positionX` read-only property of the [AudioListener] interface is an [AudioParam] representing the x position of the listener in 3D cartesian space.\n\n> **Note:** The parameter is _a-rate_ when used with a [PannerNode] whose [PannerNode.panningModel] is set to equalpower, or _k-rate_ otherwise.", + "positiony": "\n\nThe `positionY` read-only property of the [AudioListener] interface is an [AudioParam] representing the y position of the listener in 3D cartesian space.\n\n> **Note:** The parameter is _a-rate_ when used with a [PannerNode] whose [PannerNode.panningModel] is set to equalpower, or _k-rate_ otherwise.", + "positionz": "\n\nThe `positionZ` read-only property of the [AudioListener] interface is an [AudioParam] representing the z position of the listener in 3D cartesian space.\n\n> **Note:** The parameter is _a-rate_ when used with a [PannerNode] whose [PannerNode.panningModel] is set to equalpower, or _k-rate_ otherwise.", + "setorientation": "\n\nThe `setOrientation()` method of the [AudioListener] interface defines the orientation of the listener.\n\nIt consists of two direction vectors:\n\n- The _front vector_, defined by the three unitless parameters `x`, `y` and `z`, describes the direction of the face of the listener, that is the direction the nose of the person is pointing towards. The front vector's default value is `(0, 0, -1)`.\n- The _up vector_, defined by three unitless parameters `xUp`, `yUp` and `zUp`, describes the direction of the top of the listener's head. The up vector's default value is `(0, 1, 0)`.\n\nThe two vectors must be separated by an angle of 90° — in linear analysis terms, they must be perpendicular to each other.", + "setposition": " \n\nThe `setPosition()` method of the [AudioListener] Interface defines the position of the listener.\n\nThe three parameters `x`, `y` and `z` are unitless and describe the listener's position in 3D space according to the right-hand Cartesian coordinate system. 
[PannerNode] objects use this position relative to individual audio sources for spatialization.\n\nThe default value of the position vector is `(0, 0, 0)`.\n\n> **Note:** As this method is deprecated, use the three [AudioListener.positionX], [AudioListener.positionY], and [AudioListener.positionZ] properties instead.", + "upx": "\n\nThe `upX` read-only property of the [AudioListener] interface is an [AudioParam] representing the x value of the direction vector defining the up direction the listener is pointing in.\n\n> **Note:** The parameter is _a-rate_ when used with a [PannerNode] whose [PannerNode.panningModel] is set to equalpower, or _k-rate_ otherwise.", + "upy": "\n\nThe `upY` read-only property of the [AudioListener] interface is an [AudioParam] representing the y value of the direction vector defining the up direction the listener is pointing in.\n\n> **Note:** The parameter is _a-rate_ when used with a [PannerNode] whose [PannerNode.panningModel] is set to equalpower, or _k-rate_ otherwise.", + "upz": "\n\nThe `upZ` read-only property of the [AudioListener] interface is an [AudioParam] representing the z value of the direction vector defining the up direction the listener is pointing in.\n\n> **Note:** The parameter is _a-rate_ when used with a [PannerNode] whose [PannerNode.panningModel] is set to equalpower, or _k-rate_ otherwise." + } + }, + "audionode": { + "docs": "\n\nThe **`AudioNode`** interface is a generic interface for representing an audio processing module.\n\nExamples include:\n\n- an audio source (e.g. an HTML `audio` or `video` element, an [OscillatorNode], etc.),\n- the audio destination,\n- intermediate processing module (e.g. a filter like [BiquadFilterNode] or [ConvolverNode]), or\n- volume control (like [GainNode])\n\n> **Note:** An `AudioNode` can be target of events, therefore it implements the [EventTarget] interface.", + "properties": { + "channelcount": "\n\nThe **`channelCount`** property of the [AudioNode] interface represents an integer used to determine how many channels are used when [up-mixing and down-mixing](/en-US/docs/Web/API/Web_Audio_API/Basic_concepts_behind_Web_Audio_API#up-mixing_and_down-mixing) connections to any inputs to the node.\n\n`channelCount`'s usage and precise definition depend on the value of [AudioNode.channelCountMode]:\n\n- It is ignored if the `channelCountMode` value is `max`.\n- It is used as a maximum value if the `channelCountMode` value is `clamped-max`.\n- It is used as the exact value if the `channelCountMode` value is `explicit`.", + "channelcountmode": "\n\nThe `channelCountMode` property of the [AudioNode] interface represents an enumerated value describing the way channels must be matched between the node's inputs and outputs.", + "channelinterpretation": "\n\nThe **`channelInterpretation`** property of the [AudioNode] interface represents an enumerated value describing how input channels are mapped to output channels when the number of inputs/outputs is different. For example, this setting defines how a mono input will be up-mixed to a stereo or 5.1 channel output, or how a quad channel input will be down-mixed to a stereo or mono output.\n\nThe property has two options: `speakers` and `discrete`. 
These are documented in [Basic concepts behind Web Audio API > up-mixing and down-mixing](/en-US/docs/Web/API/Web_Audio_API/Basic_concepts_behind_Web_Audio_API#up-mixing_and_down-mixing).", + "connect": "\n\nThe `connect()` method of the [AudioNode] interface lets\nyou connect one of the node's outputs to a target, which may be either another\n`AudioNode` (thereby directing the sound data to the specified node) or an\n[AudioParam], so that the node's output data is automatically used to\nchange the value of that parameter over time.", + "context": "\n\nThe read-only `context` property of the\n[AudioNode] interface returns the associated\n[BaseAudioContext], that is the object representing the processing graph\nthe node is participating in.", + "disconnect": "\n\nThe **`disconnect()`** method of the [AudioNode] interface lets you disconnect one or more nodes from the node on which the method is called.", + "numberofinputs": "\n\nThe `numberOfInputs` property of\nthe [AudioNode] interface returns the number of inputs feeding the\nnode. Source nodes are defined as nodes having a `numberOfInputs`\nproperty with a value of 0.", + "numberofoutputs": "\n\nThe `numberOfOutputs` property of\nthe [AudioNode] interface returns the number of outputs coming out of\nthe node. Destination nodes — like [AudioDestinationNode] — have\na value of 0 for this attribute." + } + }, + "audioparam": { + "docs": "\n\nThe Web Audio API's `AudioParam` interface represents an audio-related parameter, usually a parameter of an [AudioNode] (such as [GainNode.gain]).\n\nAn `AudioParam` can be set to a specific value or a change in value, and can be scheduled to happen at a specific time and following a specific pattern.\n\nEach `AudioParam` has a list of events, initially empty, that define when and how values change. When this list is not empty, changes using the `AudioParam.value` attributes are ignored. This list of events allows us to schedule changes that have to happen at very precise times, using arbitrary timeline-based automation curves. The time used is the one defined in [BaseAudioContext/currentTime].", + "properties": { + "cancelandholdattime": "\n\nThe **`cancelAndHoldAtTime()`** method of the\n[AudioParam] interface cancels all scheduled future changes to the\n`AudioParam` but holds its value at a given time until further changes are\nmade using other methods.", + "cancelscheduledvalues": "\n\nThe `cancelScheduledValues()` method of the [AudioParam]\nInterface cancels all scheduled future changes to the `AudioParam`.", + "defaultvalue": "\n\nThe **`defaultValue`**\nread-only property of the [AudioParam] interface represents the initial\nvalue of the attributes as defined by the specific [AudioNode] creating\nthe `AudioParam`.", + "exponentialramptovalueattime": "\n\nThe **`exponentialRampToValueAtTime()`** method of the [AudioParam] Interface schedules a gradual exponential change in the value\nof the [AudioParam]. The change starts at the time specified for the\n_previous_ event, follows an exponential ramp to the new value given in the\n`value` parameter, and reaches the new value at the time given in the\n`endTime` parameter.\n\n> **Note:** Exponential ramps are considered more useful when changing\n> frequencies or playback rates than linear ramps because of the way the human ear\n> works.", + "linearramptovalueattime": "\n\nThe `linearRampToValueAtTime()` method of the [AudioParam]\nInterface schedules a gradual linear change in the value of the\n`AudioParam`. 
The change starts at the time specified for the\n_previous_ event, follows a linear ramp to the new value given in the\n`value` parameter, and reaches the new value at the time given in the\n`endTime` parameter.", + "maxvalue": "\n\nThe **`maxValue`**\nread-only property of the [AudioParam] interface represents the maximum\npossible value for the parameter's nominal (effective) range.", + "minvalue": "\n\nThe **`minValue`**\nread-only property of the [AudioParam] interface represents the minimum\npossible value for the parameter's nominal (effective) range.", + "settargetattime": "\n\nThe `setTargetAtTime()` method of the\n[AudioParam] interface schedules the start of a gradual change to the\n`AudioParam` value. This is useful for decay or release portions of ADSR\nenvelopes.", + "setvalueattime": "\n\nThe `setValueAtTime()` method of the\n[AudioParam] interface schedules an instant change to the\n`AudioParam` value at a precise time, as measured against\n[BaseAudioContext/currentTime]. The new value is given in the value parameter.", + "setvaluecurveattime": "\n\nThe\n**`setValueCurveAtTime()`** method of the\n[AudioParam] interface schedules the parameter's value to change\nfollowing a curve defined by a list of values.\n\nThe curve is a linear\ninterpolation between the sequence of values defined in an array of floating-point\nvalues, which are scaled to fit into the given interval starting at\n`startTime` and a specific duration.", + "value": "\n\nThe [Web Audio API's](/en-US/docs/Web/API/Web_Audio_API)\n[AudioParam] interface property **`value`** gets\nor sets the value of this [AudioParam] at the current time. Initially, the value is set to [AudioParam.defaultValue].\n\nSetting `value` has the same effect as\ncalling [AudioParam.setValueAtTime] with the time returned by the\n`AudioContext`'s [BaseAudioContext/currentTime]\nproperty." + } + }, + "audioparamdescriptor": { + "docs": "\n\nThe **`AudioParamDescriptor`** dictionary of the [Web Audio API](/en-US/docs/Web/API/Web_Audio_API) specifies properties for [AudioParam] objects.\n\nIt is used to create custom `AudioParam`s on an [AudioWorkletNode]. If the underlying [AudioWorkletProcessor] has a [AudioWorkletProcessor.parameterDescriptors] static getter, then the returned array of objects based on this dictionary is used internally by `AudioWorkletNode` constructor to populate its [AudioWorkletNode.parameters] property accordingly." + }, + "audioparammap": { + "docs": "\n\nThe **`AudioParamMap`** interface of the [Web Audio API](/en-US/docs/Web/API/Web_Audio_API) represents an iterable and read-only set of multiple audio parameters.\n\nAn `AudioParamMap` instance is a read-only [`Map`-like object](/en-US/docs/Web/JavaScript/Reference/Global_Objects/Map#map-like_browser_apis), in which each key is the name string for a parameter, and the corresponding value is an [AudioParam] containing the value of that parameter." + }, + "audioprocessingevent": { + "docs": "\n\nThe `AudioProcessingEvent` interface of the [Web Audio API](/en-US/docs/Web/API/Web_Audio_API) represents events that occur when a [ScriptProcessorNode] input buffer is ready to be processed.\n\nAn `audioprocess` event with this interface is fired on a [ScriptProcessorNode] when audio processing is required. 
During audio processing, the input buffer is read and processed to produce output audio data, which is then written to the output buffer.\n\n> **Warning:** This feature has been deprecated and should be replaced by an [`AudioWorklet`](/en-US/docs/Web/API/AudioWorklet).\n\n", + "properties": { + "inputbuffer": "\n\nThe **`inputBuffer`** read-only property of the [AudioProcessingEvent] interface represents the input buffer of an audio processing event.\n\nThe input buffer is represented by an [AudioBuffer] object, which contains a collection of audio channels, each of which is an array of floating-point values representing the audio signal waveform encoded as a series of amplitudes. The number of channels and the length of each channel are determined by the channel count and buffer size properties of the `AudioBuffer`.", + "outputbuffer": "\n\nThe **`outputBuffer`** read-only property of the [AudioProcessingEvent] interface represents the output buffer of an audio processing event.\n\nThe output buffer is represented by an [AudioBuffer] object, which contains a collection of audio channels, each of which is an array of floating-point values representing the audio signal waveform encoded as a series of amplitudes. The number of channels and the length of each channel are determined by the channel count and buffer size properties of the `AudioBuffer`.", + "playbacktime": "\n\nThe **`playbackTime`** read-only property of the [AudioProcessingEvent] interface represents the time when the audio will be played. It is in the same coordinate system as the time used by the [AudioContext]." + } + }, + "audioscheduledsourcenode": { + "docs": "\n\nThe `AudioScheduledSourceNode` interface—part of the Web Audio API—is a parent interface for several types of audio source node interfaces which share the ability to be started and stopped, optionally at specified times. Specifically, this interface defines the [AudioScheduledSourceNode.start] and [AudioScheduledSourceNode.stop] methods, as well as the [AudioScheduledSourceNode.ended_event] event.\n\n> **Note:** You can't create an `AudioScheduledSourceNode` object directly. Instead, use an interface which extends it, such as [AudioBufferSourceNode], [OscillatorNode] or [ConstantSourceNode].\n\nUnless stated otherwise, nodes based upon `AudioScheduledSourceNode` output silence when not playing (that is, before `start()` is called and after `stop()` is called). Silence is represented, as always, by a stream of samples with the value zero (0).\n\n", + "properties": { + "ended_event": "`Web Audio API`\n\nThe `ended` event of the [AudioScheduledSourceNode] interface is fired when the source node has stopped playing.\n\nThis event occurs when a [AudioScheduledSourceNode] has stopped playing, either because it's reached a predetermined stop time, the full duration of the audio has been performed, or because the entire buffer has been played.\n\nThis event is not cancelable and does not bubble.", + "start": "\n\nThe `start()` method on\n[AudioScheduledSourceNode] schedules a sound to begin playback at the\nspecified time. If no time is specified, then the sound begins playing\nimmediately.", + "stop": "\n\nThe `stop()` method on [AudioScheduledSourceNode] schedules a\nsound to cease playback at the specified time. If no time is specified, then the sound\nstops playing immediately.\n\nEach time you call `stop()` on the same node, the specified time replaces\nany previously-scheduled stop time that hasn't occurred yet. 
If the node has already\nstopped, this method has no effect.\n\n> **Note:** If a scheduled stop time occurs before the node's scheduled\n> start time, the node never starts to play." + } + }, + "audiosinkinfo": { + "docs": "\n\nThe **`AudioSinkInfo`** interface of the [Web Audio API] represents information describing an [AudioContext]'s sink ID, retrieved via [AudioContext.sinkId].\n\n", + "properties": { + "type": "\n\nThe **`type`** read-only property of the [AudioSinkInfo] interface returns the type of the audio output device." + } + }, + "audiotrack": { + "docs": "\n\nThe **`AudioTrack`** interface represents a single audio track from one of the HTML media elements, `audio` or `video`.\n\nThe most common use for accessing an `AudioTrack` object is to toggle its [AudioTrack.enabled] property in order to mute and unmute the track.", + "properties": { + "enabled": "\n\nThe **[AudioTrack]** property\n**`enabled`** specifies whether or not the described audio\ntrack is currently enabled for use. If the track is disabled by setting\n`enabled` to `false`, the track is muted and does not produce\naudio.", + "id": "\n\nThe **`id`** property contains a\nstring which uniquely identifies the track represented by the\n**[AudioTrack]**.\n\nThis ID can be used with the\n[AudioTrackList.getTrackById] method to locate a specific track within\nthe media associated with a media element. The track ID can also be used as the fragment of a URL that loads the specific track\n(if the media supports media fragments).", + "kind": "\n\nThe **`kind`** property contains a\nstring indicating the category of audio contained in the\n**[AudioTrack]**.\n\nThe `kind` can be used\nto determine the scenarios in which specific tracks should be enabled or disabled. See\n[Audio track kind strings](#audio_track_kind_strings) for a list of the kinds available for audio tracks.", + "label": "\n\nThe read-only **[AudioTrack]**\nproperty **`label`** returns a string specifying the audio\ntrack's human-readable label, if one is available; otherwise, it returns an empty\nstring.", + "language": "\n\nThe read-only **[AudioTrack]**\nproperty **`language`** returns a string identifying the\nlanguage used in the audio track.\n\nFor tracks that include multiple languages\n(such as a movie in English in which a few lines are spoken in other languages), this\nshould be the video's primary language.", + "sourcebuffer": "\n\nThe read-only **[AudioTrack]**\nproperty **`sourceBuffer`** returns the\n[SourceBuffer] that created the track, or null if the track was not\ncreated by a [SourceBuffer] or the [SourceBuffer] has been\nremoved from the [MediaSource.sourceBuffers] attribute of its parent\nmedia source." + } + }, + "audiotracklist": { + "docs": "\n\nThe **`AudioTrackList`** interface is used to represent a list of the audio tracks contained within a given HTML media element, with each track represented by a separate [AudioTrack] object in the list.\n\nRetrieve an instance of this object with [HTMLMediaElement.audioTracks]. 
The individual tracks can be accessed using array syntax.\n\n", + "properties": { + "addtrack_event": "\n\nThe `addtrack` event is fired when a track is added to an [`AudioTrackList`](/en-US/docs/Web/API/AudioTrackList).", + "change_event": "\n\nThe `change` event is fired when an audio track is enabled or disabled, for example by changing the track's [`enabled`](/en-US/docs/Web/API/AudioTrack/enabled) property.\n\nThis event is not cancelable and does not bubble.", + "gettrackbyid": "\n\nThe **[AudioTrackList]** method\n**`getTrackById()`** returns the first\n[AudioTrack] object from the track list whose [AudioTrack.id] matches the specified string. This lets you find a specified track if\nyou know its ID string.", + "length": "\n\nThe read-only **[AudioTrackList]**\nproperty **`length`** returns the number of entries in the\n`AudioTrackList`, each of which is an [AudioTrack]\nrepresenting one audio track in the media element. A value of 0 indicates that\nthere are no audio tracks in the media.", + "removetrack_event": "\n\nThe `removetrack` event is fired when a track is removed from an [`AudioTrackList`](/en-US/docs/Web/API/AudioTrackList)." + } + }, + "audioworklet": { + "docs": "\n\nThe **`AudioWorklet`** interface of the [Web Audio API](/en-US/docs/Web/API/Web_Audio_API) is used to supply custom audio processing scripts that execute in a separate thread to provide very low latency audio processing.\n\nThe worklet's code is run in the [AudioWorkletGlobalScope] global execution context, using a separate Web Audio thread which is shared by the worklet and other audio nodes.\n\nAccess the audio context's instance of `AudioWorklet` through the [BaseAudioContext.audioWorklet] property.\n\n" + }, + "audioworkletglobalscope": { + "docs": "\n\nThe **`AudioWorkletGlobalScope`** interface of the [Web Audio API](/en-US/docs/Web/API/Web_Audio_API) represents a global execution context for user-supplied code, which defines custom [AudioWorkletProcessor]-derived classes.\n\nEach [BaseAudioContext] has a single [AudioWorklet] available under the [BaseAudioContext.audioWorklet] property, which runs its code in a single `AudioWorkletGlobalScope`.\n\nAs the global execution context is shared across the current `BaseAudioContext`, it's possible to define any other variables and perform any actions allowed in worklets — apart from defining `AudioWorkletProcessor` derived classes.\n\n", + "properties": { + "currentframe": "\n\nThe read-only **`currentFrame`** property of the [AudioWorkletGlobalScope] interface returns an integer that represents the ever-increasing current sample-frame of the audio block being processed. It is incremented by 128 (the size of a render quantum) after the processing of each audio block.", + "currenttime": "\n\nThe read-only **`currentTime`** property of the [AudioWorkletGlobalScope] interface returns a double that represents the ever-increasing context time of the audio block being processed. It is equal to the [BaseAudioContext.currentTime] property of the [BaseAudioContext] the worklet belongs to.", + "registerprocessor": "\n\nThe **`registerProcessor`** method of the\n[AudioWorkletGlobalScope] interface registers a class constructor derived\nfrom [AudioWorkletProcessor] interface under a specified _name_.", + "samplerate": "\n\nThe read-only **`sampleRate`** property of the [AudioWorkletGlobalScope] interface returns a float that represents the sample rate of the associated [BaseAudioContext] the worklet belongs to." 
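As an illustration (an assumed sketch rather than scraped MDN text), a worklet module running in an `AudioWorkletGlobalScope` defines a processor class and registers it with `registerProcessor()`; the module file name and the `white-noise` processor name are arbitrary:

```js
// white-noise-processor.js — evaluated inside an AudioWorkletGlobalScope.
class WhiteNoiseProcessor extends AudioWorkletProcessor {
  process(inputs, outputs) {
    for (const channel of outputs[0]) {
      for (let i = 0; i < channel.length; i++) {
        channel[i] = Math.random() * 2 - 1; // fill the current render quantum with noise
      }
    }
    return true; // keep the processor alive
  }
}

registerProcessor('white-noise', WhiteNoiseProcessor);
```

The main thread would then load the module with `audioCtx.audioWorklet.addModule('white-noise-processor.js')` and instantiate it via `new AudioWorkletNode(audioCtx, 'white-noise')`.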
+ } + }, + "audioworkletnode": { + "docs": "\n\n> **Note:** Although the interface is available outside [secure contexts](/en-US/docs/Web/Security/Secure_Contexts), the [BaseAudioContext.audioWorklet] property is not, thus custom [AudioWorkletProcessor]s cannot be defined outside them.\n\nThe **`AudioWorkletNode`** interface of the [Web Audio API](/en-US/docs/Web/API/Web_Audio_API) represents a base class for a user-defined [AudioNode], which can be connected to an audio routing graph along with other nodes. It has an associated [AudioWorkletProcessor], which does the actual audio processing in a Web Audio rendering thread.\n\n", + "properties": { + "parameters": "\n\nThe read-only **`parameters`** property of the\n[AudioWorkletNode] interface returns the associated\n[AudioParamMap] — that is, a `Map`-like collection of\n[AudioParam] objects. They are instantiated during creation of the\nunderlying [AudioWorkletProcessor] according to its\n[AudioWorkletProcessor.parameterDescriptors] static\ngetter.", + "port": "\n\nThe read-only **`port`** property of the\n[AudioWorkletNode] interface returns the associated\n[MessagePort]. It can be used to communicate between the node and its\nassociated [AudioWorkletProcessor].\n\n> **Note:** The port at the other end of the channel is\n> available under the [AudioWorkletProcessor.port] property of the\n> processor.", + "processorerror_event": "\n\nThe `processorerror` event fires when the underlying [AudioWorkletProcessor] behind the node throws an exception in its constructor, the [AudioWorkletProcessor.process] method, or any user-defined class method.\n\nOnce an exception is thrown, the processor (and thus the node) will output silence throughout its lifetime." + } + }, + "audioworkletprocessor": { + "docs": "\n\nThe **`AudioWorkletProcessor`** interface of the [Web Audio API](/en-US/docs/Web/API/Web_Audio_API) represents an audio processing code behind a custom [AudioWorkletNode]. It lives in the [AudioWorkletGlobalScope] and runs on the Web Audio rendering thread. In turn, an [AudioWorkletNode] based on it runs on the main thread.", + "properties": { + "parameterdescriptors": "\n\nThe read-only **`parameterDescriptors`** property of an [AudioWorkletProcessor]-derived class is a _static getter_,\nwhich returns an iterable of [AudioParamDescriptor]-based objects.\n\nThe property is not a part of the [AudioWorkletProcessor]\ninterface, but, if defined, it is called internally by the\n[AudioWorkletProcessor] constructor to create a list of custom\n[AudioParam] objects in the [AudioWorkletNode.parameters] property of the associated [AudioWorkletNode].\n\nDefining the getter is optional.", + "port": "\n\nThe read-only **`port`** property of the\n[AudioWorkletProcessor] interface returns the associated\n[MessagePort]. 
It can be used to communicate between the processor and the\n[AudioWorkletNode] to which it belongs.\n\n> **Note:** The port at the other end of the channel is\n> available under the [AudioWorkletNode.port] property of the node.", + "process": "\n\nThe **`process()`**\nmethod of an [AudioWorkletProcessor]-derived class implements the audio\nprocessing algorithm for the audio processor worklet.\n\nAlthough the method is\nnot a part of the [AudioWorkletProcessor] interface, any implementation\nof `AudioWorkletProcessor` must provide a `process()` method.\n\nThe method is called synchronously from the audio rendering thread, once for each block\nof audio (also known as a rendering quantum) being directed through the processor's\ncorresponding [AudioWorkletNode]. In other words, every time a new block of\naudio is ready for your processor to manipulate, your `process()` function is\ninvoked to do so.\n\n> **Note:** Currently, audio data blocks are always 128 frames\n> long—that is, they contain 128 32-bit floating-point samples for each of the inputs'\n> channels. However, plans are already in place to revise the specification to allow the\n> size of the audio blocks to be changed depending on circumstances (for example, if the\n> audio hardware or CPU utilization is more efficient with larger block sizes).\n> Therefore, you _must always check the size of the sample array_ rather than\n> assuming a particular size.\n>\n> This size may even be allowed to change over time, so you mustn't look at just the\n> first block and assume the sample buffers will always be the same size." + } + }, + "authenticatorassertionresponse": { + "docs": "\n\nThe **`AuthenticatorAssertionResponse`** interface of the [Web Authentication API](/en-US/docs/Web/API/Web_Authentication_API) contains a [digital signature](/en-US/docs/Glossary/Signature/Security) from the private key of a particular WebAuthn credential. The relying party's server can verify this signature to authenticate a user, for example when they sign in.\n\nAn `AuthenticatorAssertionResponse` object instance is available in the [PublicKeyCredential.response] property of a [PublicKeyCredential] object returned by a successful [CredentialsContainer.get] call.\n\nThis interface inherits from [AuthenticatorResponse].\n\n> **Note:** This interface is restricted to top-level contexts. Use from within an `iframe` element will not have any effect.", + "properties": { + "authenticatordata": "\n\nThe **`authenticatorData`** property of the [AuthenticatorAssertionResponse] interface returns an `ArrayBuffer` containing information from the authenticator such as the Relying Party ID Hash (rpIdHash), a signature counter, test of user presence, user verification flags, and any extensions processed by the authenticator.", + "signature": "\n\nThe **`signature`** read-only property of the\n[AuthenticatorAssertionResponse] interface is an `ArrayBuffer`\nobject which is the signature of the authenticator for both\n[AuthenticatorAssertionResponse.authenticatorData] and a SHA-256 hash of\nthe client data\n([AuthenticatorResponse.clientDataJSON]).\n\nThis signature will be sent to the server for control, as part of the response. It\nprovides the proof that an authenticator does possess the private key which was used for\nthe credential's generation.", + "userhandle": "\n\nThe **`userHandle`** read-only property of the [AuthenticatorAssertionResponse] interface is an `ArrayBuffer` object providing an opaque identifier for the given user. 
Such an identifier can be used by the relying party's server to link the user account with its corresponding credentials and other data.\n\nThis value is specified as `user.id` in the options passed to the originating [CredentialsContainer.create] call." + } + }, + "authenticatorattestationresponse": { + "docs": "\n\nThe **`AuthenticatorAttestationResponse`** interface of the [Web Authentication API](/en-US/docs/Web/API/Web_Authentication_API) is the result of a WebAuthn credential registration. It contains information about the credential that the server needs to perform WebAuthn assertions, such as its credential ID and public key.\n\nAn `AuthenticatorAttestationResponse` object instance is available in the [PublicKeyCredential.response] property of a [PublicKeyCredential] object returned by a successful [CredentialsContainer.create] call.\n\nThis interface inherits from [AuthenticatorResponse].\n\n> **Note:** This interface is restricted to top-level contexts. Use of its features from within an `iframe` element will not have any effect.", + "properties": { + "attestationobject": "\n\nThe **`attestationObject`** property of the\n[AuthenticatorAttestationResponse] interface returns an\n`ArrayBuffer` containing the new public key, as well as signature over the\nentire `attestationObject` with a private key that is stored in the\nauthenticator when it is manufactured.\n\nAs part of the [CredentialsContainer.create] call, an authenticator will\ncreate a new keypair as well as an `attestationObject` for that keypair. The public key\nthat corresponds to the private key that has created the attestation signature is well\nknown; however, there are various well known attestation public key chains for different\necosystems (for example, Android or TPM attestations).", + "getauthenticatordata": "\n\nThe **`getAuthenticatorData()`** method of the [AuthenticatorAttestationResponse] interface returns an `ArrayBuffer` containing the authenticator data contained within the [AuthenticatorAttestationResponse.attestationObject] property.\n\nThis is a convenience function, created to allow easy access to the authenticator data without having to write extra parsing code to extract it from the `attestationObject`.", + "getpublickey": "\n\nThe **`getPublicKey()`** method of the [AuthenticatorAttestationResponse] interface returns an `ArrayBuffer` containing the DER `SubjectPublicKeyInfo` of the new credential (see [Subject Public Key Info](https://www.rfc-editor.org/rfc/rfc5280#section-4.1.2.7)), or `null` if this is not available.\n\nThis is a convenience function, created to allow easy access to the public key. This key will need to be stored in order to verify future authentication operations (i.e., using [CredentialsContainer.get]).", + "getpublickeyalgorithm": "\n\nThe **`getPublicKeyAlgorithm()`** method of the [AuthenticatorAttestationResponse] interface returns a number that is equal to a [COSE Algorithm Identifier](https://www.iana.org/assignments/cose/cose.xhtml#algorithms), representing the cryptographic algorithm used for the new credential.\n\nThis is a convenience function created to allow easy access to the algorithm type. 
This information will need to be stored in order to verify future authentication operations (i.e., using [CredentialsContainer.get]).", + "gettransports": "\n\nThe **`getTransports()`** method of the [AuthenticatorAttestationResponse] interface returns an array of strings describing the different transports which may be used by the authenticator.\n\nSuch transports may be USB, NFC, BLE, internal (applicable when the authenticator is not removable from the device), or a hybrid approach. Sites should not interpret this array but instead store it along with the rest of the credential information. In a subsequent [CredentialsContainer.get] call, the `transports` value(s) specified inside `publicKey.allowCredentials` should be set to the stored array value. This provides a hint to the browser as to which types of authenticators to try when making an assertion for this credential." + } + }, + "authenticatorresponse": { + "docs": "\n\nThe **`AuthenticatorResponse`** interface of the [Web Authentication API](/en-US/docs/Web/API/Web_Authentication_API) is the base interface for interfaces that provide a cryptographic root of trust for a key pair. The child interfaces include information from the browser such as the challenge origin and either may be returned from [PublicKeyCredential.response].", + "properties": { + "clientdatajson": "\n\nThe **`clientDataJSON`** property of the [AuthenticatorResponse] interface stores a [JSON](/en-US/docs/Learn/JavaScript/Objects/JSON) string in an\n`ArrayBuffer`, representing the client data that was passed to [CredentialsContainer.create] or [CredentialsContainer.get]. This property is only accessed on one of the child objects of `AuthenticatorResponse`, specifically [AuthenticatorAttestationResponse] or [AuthenticatorAssertionResponse]." + } + }, + "backgroundfetchevent": { + "docs": "\n\nThe **`BackgroundFetchEvent`** interface of the [Background Fetch API] is the event type for background fetch events dispatched on the [ServiceWorkerGlobalScope].\n\nIt is the event type passed to `onbackgroundfetchabort` and `onbackgroundfetchclick`.\n\n", + "properties": { + "registration": "\n\nThe **`registration`** read-only property of the [BackgroundFetchEvent] interface returns a [BackgroundFetchRegistration] object." + } + }, + "backgroundfetchmanager": { + "docs": "\n\nThe **`BackgroundFetchManager`** interface of the [Background Fetch API] is a map where the keys are background fetch IDs and the values are [BackgroundFetchRegistration] objects.", + "properties": { + "fetch": "\n\nThe **`fetch()`** method of the [BackgroundFetchManager] interface initiates a background fetch operation, given one or more URLs or [Request] objects.", + "get": "\n\nThe **`get()`** method of the [BackgroundFetchManager] interface returns a `Promise` that resolves with the [BackgroundFetchRegistration] associated with the provided `id` or `undefined` if the `id` is not found.", + "getids": "\n\nThe **`getIds()`** method of the [BackgroundFetchManager] interface returns the IDs of all registered background fetches." 
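For illustration only (an assumed sketch; the fetch ID, URLs, title, and size are made up), a background fetch is started from a service worker registration, and existing fetches can be enumerated with `getIds()`:

```js
// Illustrative sketch: start a background fetch and list registered fetch IDs.
async function startEpisodeDownload() {
  const swReg = await navigator.serviceWorker.ready;
  const bgFetch = await swReg.backgroundFetch.fetch(
    'episode-5',
    ['/media/episode-5.mp3', '/media/episode-5-artwork.png'],
    { title: 'Episode 5', downloadTotal: 60 * 1024 * 1024 },
  );
  console.log(bgFetch.id, await swReg.backgroundFetch.getIds());
}
```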
+ } + }, + "backgroundfetchrecord": { + "docs": "\n\nThe **`BackgroundFetchRecord`** interface of the [Background Fetch API] represents an individual request and response.\n\nA `BackgroundFetchRecord` is created by the [BackgroundFetchRegistration.match] method, therefore there is no constructor for this interface.\n\nThere will be one `BackgroundFetchRecord` for each resource requested by `fetch()`.", + "properties": { + "request": "\n\nThe **`request`** read-only property of the [BackgroundFetchRecord] interface returns the details of the resource to be fetched.", + "responseready": "\n\nThe **`responseReady`** read-only property of the [BackgroundFetchRecord] interface returns a `Promise` that resolves with a [Response]." + } + }, + "backgroundfetchregistration": { + "docs": "\n\nThe **`BackgroundFetchRegistration`** interface of the [Background Fetch API] represents an individual background fetch.\n\nA `BackgroundFetchRegistration` instance is returned by the [BackgroundFetchManager.fetch] or [BackgroundFetchManager.get] methods, and therefore there has no constructor.\n\n", + "properties": { + "abort": "\n\nThe **`abort()`** method of the [BackgroundFetchRegistration] interface aborts an active background fetch.", + "downloaded": "\n\nThe **`downloaded`** read-only property of the [BackgroundFetchRegistration] interface returns the size in bytes that has been downloaded, initially `0`.\n\nIf the value of this property changes, the [progress](/en-US/docs/Web/API/BackgroundFetchRegistration/progress_event) event is fired at the associated [BackgroundFetchRegistration] object.", + "downloadtotal": "\n\nThe **`downloadTotal`** read-only property of the [BackgroundFetchRegistration] interface returns the total size in bytes of this download. This is set when the background fetch was registered, or `0` if not set.", + "failurereason": "\n\nThe **`failureReason`** read-only property of the [BackgroundFetchRegistration] interface returns a string with a value that indicates a reason for a background fetch failure.\n\nIf the value of this property changes, the [progress](/en-US/docs/Web/API/BackgroundFetchRegistration/progress_event) event is fired at the associated [BackgroundFetchRegistration] object.", + "id": "\n\nThe **`id`** read-only property of the [BackgroundFetchRegistration] interface returns a copy of the background fetch's `ID`.", + "match": "\n\nThe **`match()`** method of the [BackgroundFetchRegistration] interface returns the first matching [BackgroundFetchRecord].", + "matchall": "\n\nThe **`matchAll()`** method of the [BackgroundFetchRegistration] interface returns an array of matching [BackgroundFetchRecord] objects.", + "progress_event": "\n\nThe **`progress`** event of the [BackgroundFetchRegistration] interface thrown when the associated background fetch progresses.\n\nPractically, this event is fired when any of the following properties will return a new value:\n\n- [BackgroundFetchRegistration.uploaded],\n- [BackgroundFetchRegistration.downloaded],\n- [BackgroundFetchRegistration.result], or\n- [BackgroundFetchRegistration.failureReason].", + "recordsavailable": "\n\nThe **`recordsAvailable`** read-only property of the [BackgroundFetchRegistration] interface returns `true` if there are requests and responses to be accessed. 
If this returns false then [BackgroundFetchRegistration.match] and [BackgroundFetchRegistration.matchAll] can't be used.", + "result": "\n\nThe **`result`** read-only property of the [BackgroundFetchRegistration] interface returns a string indicating whether the background fetch was successful or failed.\n\nIf the value of this property changes, the [progress](/en-US/docs/Web/API/BackgroundFetchRegistration/progress_event) event is fired at the associated [BackgroundFetchRegistration] object.", + "uploaded": "\n\nThe **`uploaded`** read-only property of the [BackgroundFetchRegistration] interface returns the size in bytes successfully sent, initially `0`.\n\nIf the value of this property changes, the [progress](/en-US/docs/Web/API/BackgroundFetchRegistration/progress_event) event is fired at the associated [BackgroundFetchRegistration] object.", + "uploadtotal": "\n\nThe **`uploadTotal`** read-only property of the [BackgroundFetchRegistration] interface returns the total number of bytes to be sent to the server." + } + }, + "backgroundfetchupdateuievent": { + "docs": "\n\nThe **`BackgroundFetchUpdateUIEvent`** interface of the [Background Fetch API] is an event type for the [ServiceWorkerGlobalScope.backgroundfetchsuccess_event] and [ServiceWorkerGlobalScope.backgroundfetchfail_event] events, and provides a method for updating the title and icon of the app to inform a user of the success or failure of a background fetch.\n\n", + "properties": { + "updateui": "\n\nThe **`updateUI()`** method of the [BackgroundFetchUpdateUIEvent] interface updates the title and icon in the user interface to show the status of a background fetch.\n\nThis method may only be run once, to notify the user on a failed or a successful fetch." + } + }, + "barcodedetector": { + "docs": "\n\nThe **`BarcodeDetector`** interface of the [Barcode Detection API] allows detection of linear and two dimensional barcodes in images.", + "properties": { + "detect": "\n\nThe **`detect()`** method of the\n[BarcodeDetector] interface returns a `Promise` which fulfills\nwith an `Array` of detected barcodes within an image.", + "getsupportedformats_static": "\n\nThe **`getSupportedFormats()`** static method\nof the [BarcodeDetector] interface returns a `Promise` which\nfulfills with an `Array` of supported barcode format types." + } + }, + "barprop": { + "docs": "\n\nThe **`BarProp`** interface of the [Document Object Model] represents the web browser user interface elements that are exposed to scripts in web pages. Each of the following interface elements are represented by a `BarProp` object.\n\n- [Window.locationbar]\n - : The browser location bar.\n- [Window.menubar]\n - : The browser menu bar.\n- [Window.personalbar]\n - : The browser personal bar.\n- [Window.scrollbars]\n - : The browser scrollbars.\n- [Window.statusbar]\n - : The browser status bar.\n- [Window.toolbar]\n - : The browser toolbar.\n\nThe `BarProp` interface is not accessed directly, but via one of these elements.", + "properties": { + "visible": "\n\nThe **`visible`** read-only property of the [BarProp] interface returns `true` if the user interface element it represents is visible." + } + }, + "baseaudiocontext": { + "docs": "\n\nThe `BaseAudioContext` interface of the [Web Audio API](/en-US/docs/Web/API/Web_Audio_API) acts as a base definition for online and offline audio-processing graphs, as represented by [AudioContext] and [OfflineAudioContext] respectively. 
You wouldn't use `BaseAudioContext` directly — you'd use its features via one of these two inheriting interfaces.\n\nA `BaseAudioContext` can be a target of events, therefore it implements the [EventTarget] interface.\n\n", + "properties": { + "audioworklet": "\n\nThe `audioWorklet` read-only property of the\n[BaseAudioContext] interface returns an instance of\n[AudioWorklet] that can be used for adding\n[AudioWorkletProcessor]-derived classes which implement custom audio\nprocessing.", + "createanalyser": "\n\nThe `createAnalyser()` method of the\n[BaseAudioContext] interface creates an [AnalyserNode], which\ncan be used to expose audio time and frequency data and create data visualizations.\n\n> **Note:** The [AnalyserNode.AnalyserNode] constructor is the\n> recommended way to create an [AnalyserNode]; see\n> [Creating an AudioNode](/en-US/docs/Web/API/AudioNode#creating_an_audionode).\n\n> **Note:** For more on using this node, see the\n> [AnalyserNode] page.", + "createbiquadfilter": "\n\nThe `createBiquadFilter()` method of the [BaseAudioContext]\ninterface creates a [BiquadFilterNode], which represents a second order\nfilter configurable as several different common filter types.\n\n> **Note:** The [BiquadFilterNode.BiquadFilterNode] constructor is the\n> recommended way to create a [BiquadFilterNode]; see\n> [Creating an AudioNode](/en-US/docs/Web/API/AudioNode#creating_an_audionode).", + "createbuffer": "\n\nThe `createBuffer()` method of the [BaseAudioContext]\nInterface is used to create a new, empty [AudioBuffer] object, which\ncan then be populated by data, and played via an [AudioBufferSourceNode]\n\nFor more details about audio buffers, check out the [AudioBuffer]\nreference page.\n\n> **Note:** `createBuffer()` used to be able to take compressed\n> data and give back decoded samples, but this ability was removed from the specification,\n> because all the decoding was done on the main thread, so\n> `createBuffer()` was blocking other code execution. The asynchronous method\n> `decodeAudioData()` does the same thing — takes compressed audio, such as an\n> MP3 file, and directly gives you back an [AudioBuffer] that you can\n> then play via an [AudioBufferSourceNode]. For simple use cases\n> like playing an MP3, `decodeAudioData()` is what you should be using.", + "createbuffersource": "\n\nThe `createBufferSource()` method of the [BaseAudioContext]\nInterface is used to create a new [AudioBufferSourceNode], which can be\nused to play audio data contained within an [AudioBuffer] object. 
[AudioBuffer]s are created using\n[BaseAudioContext.createBuffer] or returned by\n[BaseAudioContext.decodeAudioData] when it successfully decodes an audio\ntrack.\n\n> **Note:** The [AudioBufferSourceNode.AudioBufferSourceNode]\n> constructor is the recommended way to create a [AudioBufferSourceNode]; see\n> [Creating an AudioNode](/en-US/docs/Web/API/AudioNode#creating_an_audionode).", + "createchannelmerger": "\n\nThe `createChannelMerger()` method of the [BaseAudioContext] interface creates a [ChannelMergerNode],\nwhich combines channels from multiple audio streams into a single audio stream.\n\n> **Note:** The [ChannelMergerNode.ChannelMergerNode] constructor is the\n> recommended way to create a [ChannelMergerNode]; see\n> [Creating an AudioNode](/en-US/docs/Web/API/AudioNode#creating_an_audionode).", + "createchannelsplitter": "\n\nThe `createChannelSplitter()` method of the [BaseAudioContext] Interface is used to create a [ChannelSplitterNode],\nwhich is used to access the individual channels of an audio stream and process them separately.\n\n> **Note:** The [ChannelSplitterNode.ChannelSplitterNode]\n> constructor is the recommended way to create a [ChannelSplitterNode]; see\n> [Creating an AudioNode](/en-US/docs/Web/API/AudioNode#creating_an_audionode).", + "createconstantsource": "\n\nThe **`createConstantSource()`**\nproperty of the [BaseAudioContext] interface creates a\n[ConstantSourceNode] object, which is an audio source that continuously\noutputs a monaural (one-channel) sound signal whose samples all have the same\nvalue.\n\n> **Note:** The [ConstantSourceNode.ConstantSourceNode]\n> constructor is the recommended way to create a [ConstantSourceNode]; see\n> [Creating an AudioNode](/en-US/docs/Web/API/AudioNode#creating_an_audionode).", + "createconvolver": "\n\nThe `createConvolver()` method of the [BaseAudioContext]\ninterface creates a [ConvolverNode], which is commonly used to apply\nreverb effects to your audio. See the [spec definition of Convolution](https://webaudio.github.io/web-audio-api/#background-3) for more information.\n\n> **Note:** The [ConvolverNode.ConvolverNode]\n> constructor is the recommended way to create a [ConvolverNode]; see\n> [Creating an AudioNode](/en-US/docs/Web/API/AudioNode#creating_an_audionode).", + "createdelay": "\n\nThe `createDelay()` method of the\n[BaseAudioContext] Interface is used to create a [DelayNode],\nwhich is used to delay the incoming audio signal by a certain amount of time.\n\n> **Note:** The [DelayNode.DelayNode]\n> constructor is the recommended way to create a [DelayNode]; see\n> [Creating an AudioNode](/en-US/docs/Web/API/AudioNode#creating_an_audionode).", + "createdynamicscompressor": "\n\nThe `createDynamicsCompressor()` method of the [BaseAudioContext] Interface is used to create a\n[DynamicsCompressorNode], which can be used to apply compression to an\naudio signal.\n\nCompression lowers the volume of the loudest parts of the signal and raises the volume\nof the softest parts. Overall, a louder, richer, and fuller sound can be achieved. 
It is\nespecially important in games and musical applications where large numbers of individual\nsounds are played simultaneously, where you want to control the overall signal level and\nhelp avoid clipping (distorting) of the audio output.\n\n> **Note:** The [DynamicsCompressorNode.DynamicsCompressorNode]\n> constructor is the recommended way to create a [DynamicsCompressorNode]; see\n> [Creating an AudioNode](/en-US/docs/Web/API/AudioNode#creating_an_audionode).", + "creategain": "\n\nThe `createGain()` method of the [BaseAudioContext]\ninterface creates a [GainNode], which can be used to control the\noverall gain (or volume) of the audio graph.\n\n> **Note:** The [GainNode.GainNode]\n> constructor is the recommended way to create a [GainNode]; see\n> [Creating an AudioNode](/en-US/docs/Web/API/AudioNode#creating_an_audionode).", + "createiirfilter": "\n\nThe **`createIIRFilter()`** method of the [BaseAudioContext] interface creates an [IIRFilterNode],\nwhich represents a general **[infinite impulse response](https://en.wikipedia.org/wiki/Infinite_impulse_response)** (IIR) filter which can be configured to serve as various types\nof filter.\n\n> **Note:** The [IIRFilterNode.IIRFilterNode]\n> constructor is the recommended way to create a [IIRFilterNode]; see\n> [Creating an AudioNode](/en-US/docs/Web/API/AudioNode#creating_an_audionode).", + "createoscillator": "\n\nThe `createOscillator()` method of the [BaseAudioContext]\ninterface creates an [OscillatorNode], a source representing a periodic\nwaveform. It basically generates a constant tone.\n\n> **Note:** The [OscillatorNode.OscillatorNode]\n> constructor is the recommended way to create a [OscillatorNode]; see\n> [Creating an AudioNode](/en-US/docs/Web/API/AudioNode#creating_an_audionode).", + "createpanner": "\n\nThe `createPanner()` method of the [BaseAudioContext]\nInterface is used to create a new [PannerNode], which is used to\nspatialize an incoming audio stream in 3D space.\n\nThe panner node is spatialized in relation to the AudioContext's\n[AudioListener] (defined by the [BaseAudioContext/listener]\nattribute), which represents the position and orientation of the person listening to the\naudio.\n\n> **Note:** The [PannerNode.PannerNode]\n> constructor is the recommended way to create a [PannerNode]; see\n> [Creating an AudioNode](/en-US/docs/Web/API/AudioNode#creating_an_audionode).", + "createperiodicwave": "\n\nThe `createPeriodicWave()` method of the [BaseAudioContext] Interface\nis used to create a [PeriodicWave], which is used to define a periodic waveform\nthat can be used to shape the output of an [OscillatorNode].", + "createscriptprocessor": "\n\nThe `createScriptProcessor()` method of the [BaseAudioContext] interface\ncreates a [ScriptProcessorNode] used for direct audio processing.\n\n> **Note:** This feature was replaced by [AudioWorklets](/en-US/docs/Web/API/AudioWorklet) and the [AudioWorkletNode] interface.", + "createstereopanner": "\n\nThe `createStereoPanner()` method of the [BaseAudioContext] interface creates a [StereoPannerNode], which can be used to apply\nstereo panning to an audio source.\nIt positions an incoming audio stream in a stereo image using a [low-cost panning algorithm](https://webaudio.github.io/web-audio-api/#stereopanner-algorithm).\n\n> **Note:** The [StereoPannerNode.StereoPannerNode]\n> constructor is the recommended way to create a [StereoPannerNode]; see\n> [Creating an AudioNode](/en-US/docs/Web/API/AudioNode#creating_an_audionode).", + "createwaveshaper": "\n\nThe 
`createWaveShaper()` method of the [BaseAudioContext]\ninterface creates a [WaveShaperNode], which represents a non-linear\ndistortion. It is used to apply distortion effects to your audio.\n\n> **Note:** The [WaveShaperNode.WaveShaperNode]\n> constructor is the recommended way to create a [WaveShaperNode]; see\n> [Creating an AudioNode](/en-US/docs/Web/API/AudioNode#creating_an_audionode).", + "currenttime": "\n\nThe `currentTime` read-only property of the [BaseAudioContext]\ninterface returns a double representing an ever-increasing hardware timestamp in seconds that\ncan be used for scheduling audio playback, visualizing timelines, etc. It starts at 0.", + "decodeaudiodata": "\n\nThe `decodeAudioData()` method of the [BaseAudioContext]\nInterface is used to asynchronously decode audio file data contained in an\n`ArrayBuffer` that is loaded from [fetch],\n[XMLHttpRequest], or [FileReader]. The decoded\n[AudioBuffer] is resampled to the [AudioContext]'s sampling\nrate, then passed to a callback or promise.\n\nThis is the preferred method of creating an audio source for Web Audio API from an\naudio track. This method only works on complete file data, not fragments of audio file\ndata.\n\nThis function implements two alternative ways to asynchronously return the audio data or error messages: it returns a `Promise` that fulfills with the audio data, and also accepts callback arguments to handle success or failure. The primary method of interfacing with this function is via its Promise return value, and the callback parameters are provided for legacy reasons.", + "destination": "\n\nThe `destination` property of the [BaseAudioContext]\ninterface returns an [AudioDestinationNode] representing the final\ndestination of all audio in the context. It often represents an actual audio-rendering\ndevice such as your device's speakers.", + "listener": "\n\nThe `listener` property of the [BaseAudioContext] interface\nreturns an [AudioListener] object that can then be used for\nimplementing 3D audio spatialization.", + "samplerate": "\n\nThe `sampleRate` property of the [BaseAudioContext] interface returns a floating point number representing\nthe sample rate, in samples per second, used by all nodes in this audio\ncontext. This limitation means that sample-rate converters are not supported.", + "state": "\n\nThe `state` read-only property of the [BaseAudioContext]\ninterface returns the current state of the `AudioContext`.", + "statechange_event": "\n\nA `statechange` event is fired at a [BaseAudioContext] object when its [BaseAudioContext.state] member changes." + } + }, + "batterymanager": { + "docs": "\n\nThe `BatteryManager` interface of the [Battery Status API] provides information about the system's battery charge level. The [navigator.getBattery] method returns a promise that resolves with a `BatteryManager` interface.\n\nSince Chrome 103, the `BatteryManager` interface of [Battery Status API] only expose to secure context.\n\n", + "properties": { + "charging": "\n\nThe **`BatteryManager.charging`** property is a Boolean value indicating whether or not the device's battery is currently being charged. When its value changes, the [BatteryManager/chargingchange_event] event is fired.\n\nIf the battery is charging or the user agent is unable to report the battery status information, this value is `true`. 
Otherwise, it is `false`.", + "chargingchange_event": "\n\nThe **`chargingchange`** event of the [Battery Status API] is fired when the battery [BatteryManager.charging] property is updated.", + "chargingtime": "\n\nThe **`BatteryManager.chargingTime`** property indicates the amount of time, in seconds, that remain until the battery is fully charged, or `0` if the battery is already fully charged or the user agent is unable to report the battery status information.\nIf the battery is currently discharging, its value is `Infinity`.\nWhen its value changes, the [BatteryManager/chargingtimechange_event] event is fired.\n\n> **Note:** Even if the time returned is precise to the second,\n> browsers round them to a higher interval\n> (typically to the closest 15 minutes) for privacy reasons.", + "chargingtimechange_event": "\n\nThe **`chargingtimechange`** event of the [Battery Status API] is fired when the battery [BatteryManager.chargingTime] property is updated.", + "dischargingtime": "\n\nThe **`BatteryManager.dischargingTime`** property indicates the amount of time, in seconds, that remains until the battery is fully discharged,\nor `Infinity` if the battery is currently charging rather than discharging or the user agent is unable to report the battery status information.\nWhen its value changes, the [BatteryManager/dischargingtimechange_event] event is fired.\n\n> **Note:** Even if the time returned is precise to the second, browsers round them to a higher\n> interval (typically to the closest 15 minutes) for privacy reasons.", + "dischargingtimechange_event": "\n\nThe **`dischargingtimechange`** event of the [Battery Status API] is fired when the battery [BatteryManager.dischargingTime] property is updated.", + "level": "\n\nThe **`BatteryManager.level`** property indicates the current battery charge level as a value between `0.0` and `1.0`.\nA value of `0.0` means the battery is empty and the system is about to be suspended.\nA value of `1.0` means the battery is full or the user agent is unable to report the battery status information.\nWhen its value changes, the [BatteryManager/levelchange_event] event is fired.", + "levelchange_event": "\n\nThe **`levelchange`** event of the [Battery Status API] is fired when the battery [BatteryManager.level] property is updated." + } + }, + "beforeinstallpromptevent": { + "docs": "\n\nThe **`BeforeInstallPromptEvent`** is the interface of the [Window.beforeinstallprompt_event] event fired at the [Window] object before a user is prompted to \"install\" a website to a home screen on mobile.\n\nThis interface inherits from the [Event] interface.\n\n", + "properties": { + "platforms": "\n\nThe **`platforms`** property of the [BeforeInstallPromptEvent] interface lists the platforms on which the event was dispatched. This is provided for user agents that want to present a choice of versions to the user such as, for example, \"web\" or \"play\" which would allow the user to choose between a web version or an Android version.", + "prompt": "\n\nThe **`prompt()`** method of the [BeforeInstallPromptEvent] interface allows a developer to show the\ninstall prompt at a time of their own choosing. 
Typically this will be called in the event handler for the app's custom install UI.\n\nThis method must be called in the event handler for a user action (such as a button click) and may only be called once on a given `BeforeInstallPromptEvent` instance.", + "userchoice": "\n\nThe **`userChoice`** property of the [BeforeInstallPromptEvent] interface represents the installation choice that the user made, when they were prompted to install the app." + } + }, + "beforeunloadevent": { + "docs": "\n\nThe **`BeforeUnloadEvent`** interface represents the event object for the [Window/beforeunload_event] event, which is fired when the current window, contained document, and associated resources are about to be unloaded.\n\nSee the [Window/beforeunload_event] event reference for detailed guidance on using this event.\n\n", + "properties": { + "returnvalue": "\n\nThe **`returnValue`** property of the\n[BeforeUnloadEvent] interface, when set to a truthy value, triggers a browser-generated confirmation dialog asking users to confirm if they _really_ want to leave the page when they try to close or reload it, or navigate somewhere else. This is intended to help prevent loss of unsaved data.\n\n> **Note:** `returnValue` is a legacy feature, and best practice is to trigger the dialog by invoking [Event.preventDefault] on the `BeforeUnloadEvent` object, while also setting `returnValue` to support legacy cases. See the [Window/beforeunload_event] event reference for detailed up-to-date guidance." + } + }, + "biquadfilternode": { + "docs": "\n\nThe `BiquadFilterNode` interface represents a simple low-order filter, and is created using the [BaseAudioContext/createBiquadFilter] method. It is an [AudioNode] that can represent different kinds of filters, tone control devices, and graphic equalizers. A `BiquadFilterNode` always has exactly one input and one output.\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
- Number of inputs: 1
- Number of outputs: 1
- Channel count mode: "max"
- Channel count: 2 (not used in the default count mode)
- Channel interpretation: "speakers"
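A minimal usage sketch (illustrative only, not part of the scraped MDN text) showing how such a filter node is typically created and wired into an audio graph:

```js
const audioCtx = new AudioContext();

// Route an oscillator through a low-pass biquad filter to the speakers.
const source = new OscillatorNode(audioCtx);
const filter = new BiquadFilterNode(audioCtx, { type: "lowpass", frequency: 440 });

source.connect(filter).connect(audioCtx.destination);
source.start();
```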
", + "properties": { + "detune": "\n\nThe `detune` property of the [BiquadFilterNode] interface is an [a-rate](/en-US/docs/Web/API/AudioParam#a-rate) [AudioParam] representing detuning of the frequency in [cents](https://en.wikipedia.org/wiki/Cent_%28music%29).", + "frequency": "\n\nThe `frequency` property of the [BiquadFilterNode] interface is an [a-rate](/en-US/docs/Web/API/AudioParam#a-rate) [AudioParam] — a double representing a frequency in the current filtering algorithm measured in hertz (Hz).\n\nIts default value is `350`, with a nominal range of `10` to the [Nyquist frequency](https://en.wikipedia.org/wiki/Nyquist_frequency) — that is, half of the sample rate.", + "gain": "\n\nThe `gain` property of the [BiquadFilterNode] interface is an [a-rate](/en-US/docs/Web/API/AudioParam#a-rate) [AudioParam] — a double representing the [gain](https://en.wikipedia.org/wiki/Gain) used in the current filtering algorithm.\n\nWhen its value is positive, it represents a real gain; when negative, it represents an attenuation.\n\nIt is expressed in dB, has a default value of `0`, and can take a value in a nominal range of `-40` to `40`.", + "getfrequencyresponse": "\n\nThe `getFrequencyResponse()` method of the [BiquadFilterNode] interface takes the current filtering algorithm's settings and calculates the\nfrequency response for frequencies specified in a specified array of frequencies.\n\nThe two output arrays, `magResponseOutput` and\n`phaseResponseOutput`, must be created before calling this method; they\nmust be the same size as the array of input frequency values\n(`frequencyArray`).", + "q": "\n\nThe `Q` property of the [BiquadFilterNode] interface is an [a-rate](/en-US/docs/Web/API/AudioParam#a-rate) [AudioParam], a double representing a [Q factor](https://en.wikipedia.org/wiki/Q_factor), or _quality factor_.\n\nIt is a dimensionless value with a default value of `1` and a nominal range of `0.0001` to `1000`.", + "type": "\n\nThe `type` property of the [BiquadFilterNode] interface is a string (enum) value defining the kind of filtering algorithm the node is implementing." + } + }, + "blob": { + "docs": "\n\nThe **`Blob`** object represents a blob, which is a file-like object of immutable, raw data; they can be read as text or binary data, or converted into a [ReadableStream] so its methods can be used for processing the data.\n\nBlobs can represent data that isn't necessarily in a JavaScript-native format. 
The [File] interface is based on `Blob`, inheriting blob functionality and expanding it to support files on the user's system.", + "properties": { + "arraybuffer": "\n\nThe **`arrayBuffer()`** method of the [Blob]\ninterface returns a `Promise` that resolves with the contents of the blob as\nbinary data contained in an `ArrayBuffer`.", + "size": "\n\nThe **`size`** read-only property of the [Blob] interface returns\nthe size of the [Blob] or [File] in bytes.", + "slice": "\n\nThe **`slice()`** method of the [Blob] interface\ncreates and returns a new `Blob` object which contains data from a subset of\nthe blob on which it's called.", + "stream": "\n\nThe **`stream()`** method of the [Blob] interface returns a [ReadableStream] which upon reading returns the data contained within the `Blob`.", + "text": "\n\nThe **`text()`** method of the\n[Blob] interface returns a `Promise` that resolves with a\nstring containing the contents of the blob, interpreted as UTF-8.", + "type": "\n\nThe **`type`** read-only property of the [Blob] interface returns the of the file.\n\n> **Note:** Based on the current implementation, browsers won't actually read the bytestream of a file to determine its media type.\n> It is assumed based on the file extension; a PNG image file renamed to .txt would give \"_text/plain_\" and not \"_image/png_\". Moreover, `blob.type` is generally reliable only for common file types like images, HTML documents, audio and video.\n> Uncommon file extensions would return an empty string.\n> Client configuration (for instance, the Windows Registry) may result in unexpected values even for common types. **Developers are advised not to rely on this property as a sole validation scheme.**" + } + }, + "blobevent": { + "docs": "\n\nThe **`BlobEvent`** interface of the [MediaStream Recording API](/en-US/docs/Web/API/MediaStream_Recording_API) represents events associated with a [Blob]. These blobs are typically, but not necessarily, associated with media content.\n\n", + "properties": { + "data": "\n\nThe **`data`** read-only property of the [BlobEvent] interface represents a [Blob] associated with the event.", + "timecode": "\n\nThe **`timecode`** read-only property of the [BlobEvent] interface indicates the difference between the timestamp of the first chunk of data, and the timestamp of the first chunk in the first `BlobEvent` produced by this recorder.\n\nNote that the `timecode` in the first produced `BlobEvent` does not need to be zero." + } + }, + "bluetooth": { + "docs": "\n\nThe **`Bluetooth`** interface of the [Web Bluetooth API](/en-US/docs/Web/API/Web_Bluetooth_API) returns a\n`Promise` to a [BluetoothDevice] object with the specified\noptions.\n\n", + "properties": { + "getavailability": "\n\nThe **`getAvailability()`** method of the [Bluetooth] interface returns `true` if the device has a Bluetooth adapter, and false otherwise (unless the user has configured the browser to not expose a real value).\n\n> **Note:** A user might not allow use of Web Bluetooth API, even if\n> `getAvailability()` returns `true`\n> ([Bluetooth.requestDevice] might\n> not resolve with a [BluetoothDevice]). Also, a user can configure their browser to return a fixed value instead of a real one.", + "getdevices": "\n\nThe **`getDevices()`** method of\n[Bluetooth] interface of [Web Bluetooth API](/en-US/docs/Web/API/Web_Bluetooth_API) exposes the\nBluetooth devices this origin is allowed to access. 
This method does not display any\npermission prompts.\n\n> **Note:** This method returns a [BluetoothDevice] for each\n> device the origin is currently allowed to access, even the ones that are out of range\n> or powered off.", + "requestdevice": " \n\nThe **`Bluetooth.requestDevice()`** method of the\n[Bluetooth] interface returns a `Promise` to a\n[BluetoothDevice] object with the specified options. If there is no chooser\nUI, this method returns the first device matching the criteria." + } + }, + "bluetoothcharacteristicproperties": { + "docs": "\n\nThe **`BluetoothCharacteristicProperties`** interface of the [Web Bluetooth API](/en-US/docs/Web/API/Web_Bluetooth_API) provides the operations that are valid on the given [BluetoothRemoteGATTCharacteristic].\n\nThis interface is returned by calling [BluetoothRemoteGATTCharacteristic.properties].", + "properties": { + "authenticatedsignedwrites": "\n\nThe **`authenticatedSignedWrites`** read-only\nproperty of the [BluetoothCharacteristicProperties] interface returns a\n`boolean` that is `true` if signed writing to the characteristic\nvalue is permitted.", + "broadcast": "\n\nThe **`broadcast`** read-only property of the\n[BluetoothCharacteristicProperties] interface returns a\n`boolean` that is `true` if the broadcast of the characteristic\nvalue is permitted using the Server Characteristic Configuration Descriptor.", + "indicate": "\n\nThe **`indicate`** read-only property of the\n[BluetoothCharacteristicProperties] interface returns a\n`boolean` that is `true` if indications of the characteristic\nvalue with acknowledgement is permitted.", + "notify": "\n\nThe **`notify`** read-only property of the\n[BluetoothCharacteristicProperties] interface returns a\n`boolean` that is `true` if notifications of the characteristic\nvalue without acknowledgement is permitted.", + "read": "\n\nThe **`read`** read-only property of the\n[BluetoothCharacteristicProperties] interface returns a\n`boolean` that is `true` if the reading of the characteristic\nvalue is permitted.", + "reliablewrite": "\n\nThe **`reliableWrite`** read-only property of\nthe [BluetoothCharacteristicProperties] interface returns a\n`boolean` that is `true` if reliable writes to the characteristic\nis permitted.", + "writableauxiliaries": "\n\nThe **`writableAuxiliaries`** read-only\nproperty of the [BluetoothCharacteristicProperties] interface returns a\n`boolean` that is `true` if reliable writes to the characteristic\ndescriptor is permitted.", + "write": "\n\nThe **`write`** read-only property of the\n[BluetoothCharacteristicProperties] interface returns a\n`boolean` that is `true` if the writing to the characteristic with\nresponse is permitted.", + "writewithoutresponse": "\n\nThe **`writeWithoutResponse`** read-only\nproperty of the [BluetoothCharacteristicProperties] interface returns a\n`boolean` that is `true` if the writing to the characteristic\nwithout response is permitted." 
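A short sketch of how these flags are typically consulted before using a characteristic (illustrative only, not part of the scraped MDN text; `characteristic` is a hypothetical `BluetoothRemoteGATTCharacteristic` obtained from a connected device):

```js
async function readIfSupported(characteristic) {
  const props = characteristic.properties;

  // Only attempt operations the characteristic actually permits.
  if (props.read) {
    const value = await characteristic.readValue(); // DataView
    console.log("first byte:", value.getUint8(0));
  }
  if (props.notify) {
    await characteristic.startNotifications();
  }
}
```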
+ } + }, + "bluetoothdevice": { + "docs": "\n\nThe BluetoothDevice interface of the [Web Bluetooth API](/en-US/docs/Web/API/Web_Bluetooth_API) represents a Bluetooth device inside a particular script execution\nenvironment.\n\n", + "properties": { + "gatt": "\n\nThe\n**`BluetoothDevice.gatt`** read-only property returns\na reference to the device's [BluetoothRemoteGATTServer].", + "id": "\n\nThe **`BluetoothDevice.id`** read-only property returns a\nstring that uniquely identifies a device.", + "name": "\n\nThe **`BluetoothDevice.name`** read-only property returns a\nstring that provides a human-readable name for the device." + } + }, + "bluetoothremotegattcharacteristic": { + "docs": "\n\nThe `BluetoothRemoteGattCharacteristic` interface of the [Web Bluetooth API](/en-US/docs/Web/API/Web_Bluetooth_API) represents a GATT Characteristic, which is a basic data element that provides further information about a peripheral's service.\n\n", + "properties": { + "getdescriptor": "\n\nThe **`BluetoothRemoteGATTCharacteristic.getDescriptor()`** method\nreturns a `Promise` that resolves to the\nfirst [BluetoothRemoteGATTDescriptor] for a given descriptor UUID.", + "getdescriptors": "\n\nThe **`BluetoothRemoteGATTCharacteristic.getDescriptors()`** method\nreturns a `Promise` that resolves to an `Array` of all\n[BluetoothRemoteGATTDescriptor] objects for a given descriptor UUID.", + "properties": "\n\nThe **`BluetoothRemoteGATTCharacteristic.properties`**\nread-only property returns a [BluetoothCharacteristicProperties] instance\ncontaining the properties of this characteristic.", + "readvalue": "\n\nThe **`BluetoothRemoteGATTCharacteristic.readValue()`** method\nreturns a `Promise` that resolves to a `DataView` holding a\nduplicate of the `value` property if it is available and supported. Otherwise\nit throws an error.", + "service": "\n\nThe **`BluetoothRemoteGATTCharacteristic.service`** read-only\nproperty returns the [BluetoothRemoteGATTService] this characteristic belongs to.", + "startnotifications": "\n\nThe **`BluetoothRemoteGATTCharacteristic.startNotifications()`** method\nreturns a `Promise` to the BluetoothRemoteGATTCharacteristic instance when\nthere is an active notification on it.", + "stopnotifications": "\n\nThe **`BluetoothRemoteGATTCharacteristic.stopNotifications()`** method\nreturns a `Promise` to the BluetoothRemoteGATTCharacteristic instance when\nthere is no longer an active notification on it.", + "uuid": "\n\nThe **`BluetoothRemoteGATTCharacteristic.uuid`** read-only\nproperty returns a string containing the UUID of the characteristic, for\nexample `'00002a37-0000-1000-8000-00805f9b34fb'` for the Heart Rate\nMeasurement characteristic.", + "value": "\n\nThe **`BluetoothRemoteGATTCharacteristic.value`** read-only\nproperty returns currently cached characteristic value. 
This value gets updated when the\nvalue of the characteristic is read or updated via a notification or indication.", + "writevalue": "\n\nUse [BluetoothRemoteGATTCharacteristic.writeValueWithResponse] and [BluetoothRemoteGATTCharacteristic.writeValueWithoutResponse] instead.\n\nThe **`BluetoothRemoteGATTCharacteristic.writeValue()`** method sets a [BluetoothRemoteGATTCharacteristic] object's `value` property to the bytes contained in a given `ArrayBuffer`, calls [`WriteCharacteristicValue`(_this_=`this`, _value=value_, _response_=`\"optional\"`)](https://webbluetoothcg.github.io/web-bluetooth/#writecharacteristicvalue), and returns the resulting `Promise`.", + "writevaluewithoutresponse": "\n\nThe **`BluetoothRemoteGATTCharacteristic.writeValueWithoutResponse()`** method sets a [BluetoothRemoteGATTCharacteristic] object's `value` property to the bytes contained in a given `ArrayBuffer`, calls [`WriteCharacteristicValue`(_this_=`this`, _value=value_, _response_=`\"never\"`)](https://webbluetoothcg.github.io/web-bluetooth/#writecharacteristicvalue), and returns the resulting `Promise`.", + "writevaluewithresponse": "\n\nThe **`BluetoothRemoteGATTCharacteristic.writeValueWithResponse()`** method sets a [BluetoothRemoteGATTCharacteristic] object's `value` property to the bytes contained in a given `ArrayBuffer`, calls [`WriteCharacteristicValue`(_this_=`this`, _value=value_, _response_=`\"required\"`)](https://webbluetoothcg.github.io/web-bluetooth/#writecharacteristicvalue), and returns the resulting `Promise`." + } + }, + "bluetoothremotegattdescriptor": { + "docs": "\n\nThe `BluetoothRemoteGATTDescriptor` interface of the [Web Bluetooth API](/en-US/docs/Web/API/Web_Bluetooth_API) provides a GATT Descriptor,\nwhich provides further information about a characteristic's value.", + "properties": { + "characteristic": "\n\nThe **`BluetoothRemoteGATTDescriptor.characteristic`**\nread-only property returns the [BluetoothRemoteGATTCharacteristic] this\ndescriptor belongs to.", + "readvalue": "\n\nThe\n**`BluetoothRemoteGATTDescriptor.readValue()`**\nmethod returns a `Promise` that resolves to\nan `ArrayBuffer` holding a duplicate of the `value` property if\nit is available and supported. Otherwise it throws an error.", + "uuid": "\n\nThe **`BluetoothRemoteGATTDescriptor.uuid`** read-only property returns the of the characteristic descriptor.\nFor example '`00002902-0000-1000-8000-00805f9b34fb`' for theClient Characteristic Configuration descriptor.", + "value": "\n\nThe **`BluetoothRemoteGATTDescriptor.value`**\nread-only property returns an `ArrayBuffer` containing the currently cached\ndescriptor value. This value gets updated when the value of the descriptor is read.", + "writevalue": "\n\nThe **`BluetoothRemoteGATTDescriptor.writeValue()`**\nmethod sets the value property to the bytes contained in\nan `ArrayBuffer` and returns a `Promise`." + } + }, + "bluetoothremotegattserver": { + "docs": "\n\nThe **`BluetoothRemoteGATTServer`** interface of the [Web Bluetooth API](/en-US/docs/Web/API/Web_Bluetooth_API) represents a GATT\nServer on a remote device.", + "properties": { + "connect": "\n\nThe\n**`BluetoothRemoteGATTServer.connect()`** method causes the\nscript execution environment to connect to `this.device`.", + "connected": "\n\nThe **`BluetoothRemoteGATTServer.connected`** read-only\nproperty returns a boolean value that returns true while this script execution\nenvironment is connected to `this.device`. 
It can be false while the user\nagent is physically connected.", + "device": "\n\nThe **`BluetoothRemoteGATTServer.device`** read-only property\nreturns a reference to the [BluetoothDevice] running the server.", + "disconnect": "\n\nThe **`BluetoothRemoteGATTServer.disconnect()`** method causes\nthe script execution environment to disconnect from `this.device`.", + "getprimaryservice": "\n\nThe **`BluetoothRemoteGATTServer.getPrimaryService()`** method\nreturns a promise to the primary [BluetoothRemoteGATTService] offered by the\nBluetooth device for a specified bluetooth service UUID.", + "getprimaryservices": "\n\nThe **BluetoothRemoteGATTServer.getPrimaryServices()** method returns a\npromise to a list of primary [BluetoothRemoteGATTService] objects offered by the\nBluetooth device for a specified `BluetoothServiceUUID`." + } + }, + "bluetoothremotegattservice": { + "docs": "\n\nThe `BluetoothRemoteGATTService` interface of the [Web Bluetooth API](/en-US/docs/Web/API/Web_Bluetooth_API) represents a\nservice provided by a GATT server, including a device, a list of referenced services,\nand a list of the characteristics of this service.\n\n", + "properties": { + "device": "\n\nThe **`BluetoothGATTService.device`** read-only property\nreturns information about a Bluetooth device through an instance of\n[BluetoothDevice].", + "getcharacteristic": "\n\nThe **`BluetoothGATTService.getCharacteristic()`** method\nreturns a `Promise` to an instance of\n[BluetoothRemoteGATTCharacteristic] for a given universally unique identifier\n(UUID).", + "getcharacteristics": "\n\nThe **`BluetoothGATTService.getCharacteristics()`** method\nreturns a `Promise` to a list of [BluetoothRemoteGATTCharacteristic]\ninstances for a given universally unique identifier (UUID).", + "isprimary": "\n\nThe **`BluetoothGATTService.isPrimary`** read-only property\nreturns a boolean value that indicates whether this is a primary service. If it\nis not a primary service, it is a secondary service.", + "uuid": "\n\nThe **`BluetoothGATTService.uuid`** read-only property\nreturns a string representing the UUID of this service." + } + }, + "bluetoothuuid": { + "docs": "\n\nThe **`BluetoothUUID`** interface of the [Web Bluetooth API] provides a way to look up Universally Unique Identifier (UUID) values by name in the\n[registry](https://www.bluetooth.com/specifications/assigned-numbers/) maintained by the Bluetooth SIG.", + "properties": { + "canonicaluuid_static": "\n\nThe **`canonicalUUID()`** static method of the [BluetoothUUID] interface returns the 128-bit UUID when passed a 16- or 32-bit UUID alias.", + "getcharacteristic_static": "\n\nThe **`getCharacteristic()`** static method of the [BluetoothUUID] interface returns a UUID representing a registered characteristic when passed a name or the 16- or 32-bit UUID alias.", + "getdescriptor_static": "\n\nThe **`getDescriptor()`** static method of the [BluetoothUUID] interface returns a UUID representing a registered descriptor when passed a name or the 16- or 32-bit UUID alias.", + "getservice_static": "\n\nThe **`getService()`** static method of the [BluetoothUUID] interface returns a UUID representing a registered service when passed a name or the 16- or 32-bit UUID alias." + } + }, + "broadcastchannel": { + "docs": "\n\nThe **`BroadcastChannel`** interface represents a named channel that any of a given can subscribe to. It allows communication between different documents (in different windows, tabs, frames or iframes) of the same origin. 
Messages are broadcasted via a [BroadcastChannel/message_event] event fired at all `BroadcastChannel` objects listening to the channel, except the object that sent the message.\n\n", + "properties": { + "close": "\n\nThe **`BroadcastChannel.close()`** terminates the connection to\nthe underlying channel, allowing the object to be garbage collected.\nThis is a necessary step to perform\nas there is no other way for a browser to know\nthat this channel is not needed anymore.\n\n", + "message_event": "\n\nThe `message` event is fired on a [BroadcastChannel] object when a message arrives on that channel.", + "messageerror_event": "\n\nThe `messageerror` event is fired on a [BroadcastChannel] object when a message that can't be deserialized arrives on the channel.", + "name": "\n\nThe read-only **`BroadcastChannel.name`** property returns a string, which uniquely identifies the given channel with its name. This name is passed to the [BroadcastChannel.BroadCastChannel] constructor at creation time and is therefore read-only.\n\n", + "postmessage": "\n\nThe **`BroadcastChannel.postMessage()`** sends a message,\nwhich can be of any kind of `Object`,\nto each listener in any with the same .\nThe message is transmitted as a ['message'](/en-US/docs/Web/API/BroadcastChannel/message_event) event\ntargeted at each [BroadcastChannel] bound to the channel.\n\n" + } + }, + "bytelengthqueuingstrategy": { + "docs": "\n\nThe **`ByteLengthQueuingStrategy`** interface of the [Streams API](/en-US/docs/Web/API/Streams_API) provides a built-in byte length queuing strategy that can be used when constructing streams.", + "properties": { + "highwatermark": "\n\nThe read-only **`ByteLengthQueuingStrategy.highWaterMark`** property returns the total number of bytes that can be contained in the internal queue before [backpressure](/en-US/docs/Web/API/Streams_API/Concepts#backpressure) is applied.\n\n> **Note:** Unlike [`CountQueuingStrategy()`](/en-US/docs/Web/API/CountQueuingStrategy/CountQueuingStrategy) where the `highWaterMark` property specifies a simple count of the number of chunks, with `ByteLengthQueuingStrategy()`, the `highWaterMark` parameter specifies a number of _bytes_ — specifically, given a stream of chunks, how many bytes worth of those chunks (rather than a count of how many of those chunks) can be contained in the internal queue before backpressure is applied.", + "size": "\n\nThe **`size()`** method of the\n[ByteLengthQueuingStrategy] interface returns the given chunk's\n`byteLength` property." + } + }, + "cache": { + "docs": "\n\nThe **`Cache`** interface provides a persistent storage mechanism for [Request] / [Response] object pairs that are cached in long lived memory. How long a `Cache` object lives is browser dependent, but a single origin's scripts can typically rely on the presence of a previously populated `Cache` object. Note that the `Cache` interface is exposed to windowed scopes as well as workers. You don't have to use it in conjunction with service workers, even though it is defined in the service worker spec.\n\nAn origin can have multiple, named `Cache` objects. You are responsible for implementing how your script (e.g. in a [ServiceWorker]) handles `Cache` updates. Items in a `Cache` do not get updated unless explicitly requested; they don't expire unless deleted. Use [CacheStorage.open] to open a specific named `Cache` object and then call any of the `Cache` methods to maintain the `Cache`.\n\nYou are also responsible for periodically purging cache entries. 
Each browser has a hard limit on the amount of cache storage that a given origin can use. `Cache` quota usage estimates are available via the [StorageManager.estimate] method. The browser does its best to manage disk space, but it may delete the `Cache` storage for an origin. The browser will generally delete all of the data for an origin or none of the data for an origin. Make sure to version caches by name and use the caches only from the version of the script that they can safely operate on. See [Deleting old caches](/en-US/docs/Web/API/Service_Worker_API/Using_Service_Workers#deleting_old_caches) for more information.\n\n> **Note:** The key matching algorithm depends on the [VARY header](https://www.fastly.com/blog/best-practices-using-vary-header) in the value. So matching a new key requires looking at both key and value for entries in the `Cache` object.\n\n> **Note:** The caching API doesn't honor HTTP caching headers.\n\n", + "properties": { + "add": "\n\nThe **`add()`** method of the [Cache] interface takes a URL, retrieves it, and adds the resulting response object to the given cache.\n\nThe `add()` method is functionally equivalent to the following:\n\n```js\nfetch(url).then((response) => {\n if (!response.ok) {\n throw new TypeError(\"bad response status\");\n }\n return cache.add(url);\n});\n```\n\nFor more complex operations, you'll need to use [Cache.put] directly.\n\n> **Note:** `add()` will overwrite any key/value pair previously stored in the cache that matches the request.", + "addall": "\n\nThe **`addAll()`** method of the [Cache] interface takes an array of URLs, retrieves them, and adds the resulting response objects to the given cache. The request objects created during retrieval become keys to the stored response operations.\n\n> **Note:** `addAll()` will overwrite any key/value pairs\n> previously stored in the cache that match the request, but will fail if a\n> resulting `put()` operation would overwrite a previous cache entry stored by the same `addAll()` method.", + "delete": "\n\nThe **`delete()`** method of the [Cache] interface finds the [Cache] entry whose key is the request, and if found, deletes the [Cache] entry and returns a `Promise` that resolves to `true`.\nIf no [Cache] entry is found, it resolves to `false`.", + "keys": "\n\nThe **`keys()`** method of the [Cache] interface returns a\n`Promise` that resolves to an array of [Request] objects\nrepresenting the keys of the [Cache].\n\nThe requests are returned in the same order that they were inserted.\n\n> **Note:** Requests with duplicate URLs but different headers can be\n> returned if their responses have the `VARY` header set on them.", + "match": "\n\nThe **`match()`** method of the [Cache] interface returns a `Promise` that resolves to the [Response] associated with the first matching request in the [Cache] object.\nIf no match is found, the `Promise` resolves to `undefined`.", + "matchall": "\n\nThe **`matchAll()`** method of the [Cache]\ninterface returns a `Promise` that resolves to an array of all matching\nresponses in the [Cache] object.", + "put": "\n\nThe **`put()`** method of the\n[Cache] interface allows key/value pairs to be added to the current\n[Cache] object.\n\nOften, you will just want to [fetch]\none or more requests, then add the result straight to your cache. 
In such cases you are\nbetter off using\n[Cache.add]/[Cache.addAll], as\nthey are shorthand functions for one or more of these operations.\n\n```js\nfetch(url).then((response) => {\n if (!response.ok) {\n throw new TypeError(\"Bad response status\");\n }\n return cache.put(url, response);\n});\n```\n\n> **Note:** `put()` will overwrite any key/value pair\n> previously stored in the cache that matches the request.\n\n> **Note:** [Cache.add]/[Cache.addAll] do not\n> cache responses with `Response.status` values that are not in the 200\n> range, whereas [Cache.put] lets you store any request/response pair. As a\n> result, [Cache.add]/[Cache.addAll] can't be used to store\n> opaque responses, whereas [Cache.put] can." + } + }, + "cachestorage": { + "docs": "\n\nThe **`CacheStorage`** interface represents the storage for [Cache] objects.\n\nThe interface:\n\n- Provides a master directory of all the named caches that can be accessed by a [ServiceWorker] or other type of worker or [window] scope (you're not limited to only using it with service workers).\n- Maintains a mapping of string names to corresponding [Cache] objects.\n\nUse [CacheStorage.open] to obtain a [Cache] instance.\n\nUse [CacheStorage.match] to check if a given [Request] is a key in any of the [Cache] objects that the `CacheStorage` object tracks.\n\nYou can access `CacheStorage` through the global [caches] property.\n\n> **Note:** `CacheStorage` always rejects with a `SecurityError` on untrusted origins (i.e. those that aren't using HTTPS, although this definition will likely become more complex in the future.) When testing on Firefox, you can get around this by checking the **Enable Service Workers over HTTP (when toolbox is open)** option in the Firefox Devtools options/gear menu. Furthermore, because `CacheStorage` requires file-system access, it may be unavailable in private mode in Firefox.\n\n> **Note:** [CacheStorage.match] is a convenience method. 
Equivalent functionality to match a cache entry can be implemented by returning an array of cache names from [CacheStorage.keys], opening each cache with [CacheStorage.open], and matching the one you want with [Cache.match].\n\n", + "properties": { + "delete": "\n\nThe **`delete()`** method of the [CacheStorage] interface finds the [Cache] object matching the `cacheName`, and if found, deletes the [Cache] object and returns a `Promise` that resolves to `true`.\nIf no [Cache] object is found, it resolves to `false`.\n\nYou can access `CacheStorage` through the global [caches] property.", + "has": "\n\nThe **`has()`** method of the [CacheStorage]\ninterface returns a `Promise` that resolves to `true` if a\n[Cache] object matches the `cacheName`.\n\nYou can access `CacheStorage` through the global [caches] property.", + "keys": "\n\nThe **`keys()`** method of the [CacheStorage] interface returns a `Promise` that will resolve with an array containing strings corresponding to all of the named [Cache] objects tracked by the [CacheStorage] object in the order they were created.\nUse this method to iterate over a list of all [Cache] objects.\n\nYou can access `CacheStorage` through the global [caches] property.", + "match": "\n\nThe **`match()`** method of the [CacheStorage] interface checks if a given [Request] or URL string is a key for a stored [Response].\nThis method returns a `Promise` for a [Response], or a `Promise` which resolves to `undefined` if no match is found.\n\nYou can access `CacheStorage` through the global\n[caches] property.\n\n`Cache` objects are searched in creation order.\n\n> **Note:** [CacheStorage.match] is a convenience method.\n> Equivalent functionality is to call [cache.match] on each cache (in the order returned by [CacheStorage.keys]) until a [Response] is returned.", + "open": "\n\nThe **`open()`** method of the\n[CacheStorage] interface returns a `Promise` that resolves to\nthe [Cache] object matching the `cacheName`.\n\nYou can access `CacheStorage` through the global\n[caches] property.\n\n> **Note:** If the specified [Cache] does not exist, a new\n> cache is created with that `cacheName` and a `Promise` that\n> resolves to this new [Cache] object is returned." + } + }, + "canmakepaymentevent": { + "docs": "\n\nThe **`CanMakePaymentEvent`** interface of the [Payment Handler API] is the event object for the [ServiceWorkerGlobalScope.canmakepayment_event] event, fired on a payment app's service worker to check whether it is ready to handle a payment. Specifically, it is fired when the merchant website calls [PaymentRequest.PaymentRequest].\n\n", + "properties": { + "respondwith": "\n\nThe **`respondWith()`** method of the [CanMakePaymentEvent] interface enables the service worker to respond appropriately to signal whether it is ready to handle payments." 
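For orientation, a minimal sketch of such a handler (not taken from the scraped data; the unconditional `true` response is a placeholder for a real readiness check):

```js
// In the payment app's service worker.
self.addEventListener("canmakepayment", (event) => {
  // Signal readiness; a real handler would inspect its own state here.
  event.respondWith(Promise.resolve(true));
});
```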
+ } + }, + "canvascapturemediastreamtrack": { + "docs": "\n\nThe **`CanvasCaptureMediaStreamTrack`** interface of the [Media Capture and Streams API] represents the video track contained in a [MediaStream] being generated from a `canvas` following a call to [HTMLCanvasElement.captureStream].\n\n", + "properties": { + "canvas": "\n\nThe **`canvas`** read-only property of the [CanvasCaptureMediaStreamTrack] interface returns the [HTMLCanvasElement] from which frames are being captured.", + "requestframe": "\n\nThe **`requestFrame()`** method of the [CanvasCaptureMediaStreamTrack] interface requests that a frame be captured from the canvas and sent to the stream.\n\nApplications that need to carefully control\nthe timing of rendering and frame capture can use `requestFrame()` to\ndirectly specify when it's time to capture a frame.\n\nTo prevent automatic capture of frames, so that frames are only captured when\n`requestFrame()` is called, specify a value of 0 for the\n[HTMLCanvasElement.captureStream] method when creating\nthe stream." + } + }, + "canvasgradient": { + "docs": "\n\nThe **`CanvasGradient`** interface represents an [opaque object](https://en.wikipedia.org/wiki/Opaque_data_type) describing a gradient. It is returned by the methods [CanvasRenderingContext2D.createLinearGradient], [CanvasRenderingContext2D.createConicGradient] or [CanvasRenderingContext2D.createRadialGradient].\n\nIt can be used as a [CanvasRenderingContext2D.fillStyle] or [CanvasRenderingContext2D.strokeStyle].", + "properties": { + "addcolorstop": "\n\nThe **`CanvasGradient.addColorStop()`** method adds a new color stop,\ndefined by an `offset` and a `color`, to a given canvas gradient." + } + }, + "canvaspattern": { + "docs": "\n\nThe **`CanvasPattern`** interface represents an [opaque object](https://en.wikipedia.org/wiki/Opaque_data_type) describing a pattern, based on an image, a canvas, or a video, created by the [CanvasRenderingContext2D.createPattern] method.\n\nIt can be used as a [CanvasRenderingContext2D.fillStyle] or [CanvasRenderingContext2D.strokeStyle].", + "properties": { + "settransform": "\n\nThe **`CanvasPattern.setTransform()`** method uses a [DOMMatrix] object as the pattern's transformation matrix and invokes it on the pattern." 
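A brief sketch combining `createPattern()` and `setTransform()` (illustrative only, not part of the scraped MDN text; `img` is assumed to be an already-decoded image element):

```js
const ctx = document.querySelector("canvas").getContext("2d");

// Tile an image across the canvas, rotated 45 degrees via a DOMMatrix.
const pattern = ctx.createPattern(img, "repeat");
pattern.setTransform(new DOMMatrix().rotate(45));

ctx.fillStyle = pattern;
ctx.fillRect(0, 0, 300, 300);
```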
+ } + }, + "canvasrenderingcontext2d": { + "docs": "\n\nThe **`CanvasRenderingContext2D`** interface, part of the [Canvas API](/en-US/docs/Web/API/Canvas_API), provides the 2D rendering context for the drawing surface of a `canvas` element.\nIt is used for drawing shapes, text, images, and other objects.\n\nThe interface's properties and methods are described in the reference section of this page.\nThe [Canvas tutorial](/en-US/docs/Web/API/Canvas_API/Tutorial) has more explanation, examples, and resources, as well.\n\nFor [`OffscreenCanvas`](/en-US/docs/Web/API/OffscreenCanvas), there is an equivalent interface that provides the rendering context.\nThe offscreen rendering context inherits most of the same properties and methods as the `CanvasRenderingContext2D` and is described in more detail in the [OffscreenCanvasRenderingContext2D] reference page.", + "properties": { + "arc": "\n\nThe\n**`CanvasRenderingContext2D.arc()`**\nmethod of the [Canvas 2D API](/en-US/docs/Web/API/CanvasRenderingContext2D) adds a circular arc to the current sub-path.", + "arcto": "\n\nThe **`CanvasRenderingContext2D.arcTo()`** method of the Canvas 2D API adds a circular arc to the current sub-path, using the given control points and radius.\nThe arc is automatically connected to the path's latest point with a straight line if necessary, for example if the starting point and control points are in a line.\n\nThis method is commonly used for making rounded corners.\n\n> **Note:** You may get unexpected results when using a\n> relatively large radius: the arc's connecting line will go in whatever direction it\n> must to meet the specified radius.", + "beginpath": "\n\nThe\n**`CanvasRenderingContext2D.beginPath()`**\nmethod of the Canvas 2D API starts a new path by emptying the list of sub-paths. Call\nthis method when you want to create a new path.\n\n> **Note:** To create a new sub-path, i.e., one matching the current\n> canvas state, you can use [CanvasRenderingContext2D.moveTo].", + "beziercurveto": "\n\nThe\n**`CanvasRenderingContext2D.bezierCurveTo()`**\nmethod of the Canvas 2D API adds a cubic [Bézier curve](/en-US/docs/Glossary/Bezier_curve) to the current\nsub-path. It requires three points: the first two are control points and the third one\nis the end point. The starting point is the latest point in the current path, which can\nbe changed using [CanvasRenderingContext2D.moveTo] before\ncreating the Bézier curve.", + "canvas": "\n\nThe **`CanvasRenderingContext2D.canvas`** property, part of the\n[Canvas API](/en-US/docs/Web/API/Canvas_API), is a read-only reference to the\n[HTMLCanvasElement] object that is associated with a given context. It\nmight be [`null`](/en-US/docs/Web/JavaScript/Reference/Operators/null) if there is no associated `canvas` element.", + "clearrect": "\n\nThe\n**`CanvasRenderingContext2D.clearRect()`**\nmethod of the Canvas 2D API erases the pixels in a rectangular area by setting them to\ntransparent black.\n\n> **Note:** Be aware that `clearRect()` may cause unintended\n> side effects if you're not [using paths properly](/en-US/docs/Web/API/Canvas_API/Tutorial/Drawing_shapes#drawing_paths). Make sure to call\n> [CanvasRenderingContext2D.beginPath] before starting to\n> draw new items after calling `clearRect()`.", + "clip": "\n\nThe\n**`CanvasRenderingContext2D.clip()`**\nmethod of the Canvas 2D API turns the current or given path into the current clipping\nregion. 
The previous clipping region, if any, is intersected with the current or given\npath to create the new clipping region.\n\nIn the image below, the red outline represents a clipping region shaped like a star.\nOnly those parts of the checkerboard pattern that are within the clipping region get\ndrawn.\n\n![Star-shaped clipping region](canvas_clipping_path.png)\n\n> **Note:** Be aware that the clipping region is only constructed from\n> shapes added to the path. It doesn't work with shape primitives drawn directly to the\n> canvas, such as [CanvasRenderingContext2D.fillRect].\n> Instead, you'd have to use [CanvasRenderingContext2D.rect] to\n> add a rectangular shape to the path before calling `clip()`.\n\n> **Note:** Clip paths cannot be reverted directly. You must save your canvas state using [CanvasRenderingContext2D/save] before calling `clip()`, and restore it once you have finished drawing in the clipped area using [CanvasRenderingContext2D/restore].", + "closepath": "\n\nThe\n**`CanvasRenderingContext2D.closePath()`**\nmethod of the Canvas 2D API attempts to add a straight line from the current point to\nthe start of the current sub-path. If the shape has already been closed or has only one\npoint, this function does nothing.\n\nThis method doesn't draw anything to the canvas directly. You can render the path using\nthe [CanvasRenderingContext2D.stroke] or\n[CanvasRenderingContext2D.fill] methods.", + "createconicgradient": "\n\nThe **`CanvasRenderingContext2D.createConicGradient()`** method of the Canvas 2D API creates a gradient around a point with given coordinates.\n\nThis method returns a conic [CanvasGradient]. To be applied to a shape, the gradient must first be assigned to the [CanvasRenderingContext2D.fillStyle] or [CanvasRenderingContext2D.strokeStyle] properties.\n\n> **Note:** Gradient coordinates are global, i.e., relative to the current coordinate space. When applied to a shape, the coordinates are NOT relative to the shape's coordinates.", + "createimagedata": "\n\nThe **`CanvasRenderingContext2D.createImageData()`** method of\nthe Canvas 2D API creates a new, blank [ImageData] object with the\nspecified dimensions. All of the pixels in the new object are transparent black.", + "createlineargradient": "\n\nThe\n**`CanvasRenderingContext2D.createLinearGradient()`**\nmethod of the Canvas 2D API creates a gradient along the line connecting two given\ncoordinates.\n\n![The gradient transitions colors along the gradient line, starting at point x0, y0 and going to x1, y1, even if those points extend the gradient line beyond the edges of the element on which the gradient is drawn.](mdn-canvas-lineargradient.png)\n\nThis method returns a linear [CanvasGradient]. To be applied to a shape,\nthe gradient must first be assigned to the\n[CanvasRenderingContext2D.fillStyle] or\n[CanvasRenderingContext2D.strokeStyle] properties.\n\n> **Note:** Gradient coordinates are global, i.e., relative to the current\n> coordinate space. 
When applied to a shape, the coordinates are NOT relative to the\n> shape's coordinates.", + "createpattern": "\n\nThe **`CanvasRenderingContext2D.createPattern()`** method of the Canvas 2D API creates a pattern using the specified image and repetition.\nThis method returns a [CanvasPattern].\n\nThis method doesn't draw anything to the canvas directly.\nThe pattern it creates must be assigned to the [CanvasRenderingContext2D.fillStyle] or [CanvasRenderingContext2D.strokeStyle] properties, after which it is applied to any subsequent drawing.", + "createradialgradient": "\n\nThe\n**`CanvasRenderingContext2D.createRadialGradient()`**\nmethod of the Canvas 2D API creates a radial gradient using the size and coordinates of\ntwo circles.\n\nThis method returns a [CanvasGradient]. To be applied to a shape, the\ngradient must first be assigned to the [CanvasRenderingContext2D.fillStyle] or [CanvasRenderingContext2D.strokeStyle]\nproperties.\n\n> **Note:** Gradient coordinates are global, i.e., relative to the current\n> coordinate space. When applied to a shape, the coordinates are NOT relative to the\n> shape's coordinates.", + "direction": "\n\nThe\n**`CanvasRenderingContext2D.direction`**\nproperty of the Canvas 2D API specifies the current text direction used to draw text.", + "drawfocusifneeded": "\n\nThe\n**`CanvasRenderingContext2D.drawFocusIfNeeded()`**\nmethod of the Canvas 2D API draws a focus ring around the current or given path, if the\nspecified element is focused.", + "drawimage": "\n\nThe **`CanvasRenderingContext2D.drawImage()`** method of the\nCanvas 2D API provides different ways to draw an image onto the canvas.", + "ellipse": "\n\nThe\n**`CanvasRenderingContext2D.ellipse()`**\nmethod of the Canvas 2D API adds an elliptical arc to the current sub-path.", + "fill": "\n\nThe\n**`CanvasRenderingContext2D.fill()`**\nmethod of the Canvas 2D API fills the current or given path with the current\n[CanvasRenderingContext2D.fillStyle].", + "fillrect": "\n\nThe\n**`CanvasRenderingContext2D.fillRect()`**\nmethod of the Canvas 2D API draws a rectangle that is filled according to the current\n[CanvasRenderingContext2D.fillStyle].\n\nThis method draws directly to the canvas without modifying the current path, so any\nsubsequent [CanvasRenderingContext2D.fill] or\n[CanvasRenderingContext2D.stroke] calls will have no effect\non it.", + "fillstyle": "\n\nThe\n**`CanvasRenderingContext2D.fillStyle`**\nproperty of the [Canvas 2D API](/en-US/docs/Web/API/Canvas_API) specifies the\ncolor, gradient, or pattern to use inside shapes. The default style is `#000`\n(black).\n\n> **Note:** For more examples of fill and stroke styles, see [Applying styles and color](/en-US/docs/Web/API/Canvas_API/Tutorial/Applying_styles_and_colors) in the [Canvas tutorial](/en-US/docs/Web/API/Canvas_API/Tutorial).", + "filltext": "\n\nThe [CanvasRenderingContext2D] method\n**`fillText()`**, part of the Canvas 2D API, draws a text string\nat the specified coordinates, filling the string's characters with the current\n[CanvasRenderingContext2D.fillStyle]. 
An optional parameter\nallows specifying a maximum width for the rendered text, which the will achieve by condensing the text or by using a lower font size.\n\nThis method draws directly to the canvas without modifying the current path, so any\nsubsequent [CanvasRenderingContext2D.fill] or\n[CanvasRenderingContext2D.stroke] calls will have no effect\non it.\n\nThe text is rendered using the font and text layout configuration as defined by the\n[CanvasRenderingContext2D.font],\n[CanvasRenderingContext2D.textAlign],\n[CanvasRenderingContext2D.textBaseline], and\n[CanvasRenderingContext2D.direction] properties.\n\n> **Note:** To draw the outlines of the characters in a string, call the context's\n> [CanvasRenderingContext2D.strokeText] method.", + "filter": "\n\nThe\n**`CanvasRenderingContext2D.filter`**\nproperty of the Canvas 2D API provides filter effects such as blurring and grayscaling.\nIt is similar to the CSS `filter` property and accepts the same values.", + "font": "\n\nThe **`CanvasRenderingContext2D.font`** property of the Canvas 2D API specifies the current text style to use when drawing text.\nThis string uses the same syntax as the [CSS font](/en-US/docs/Web/CSS/font) specifier.", + "fontkerning": "\n\nThe **`CanvasRenderingContext2D.fontKerning`** property of the [Canvas API](/en-US/docs/Web/API/Canvas_API) specifies how font kerning information is used.\n\nKerning adjusts how adjacent letters are spaced in a proportional font, allowing them to edge into each other's visual area if there is space available.\nFor example, in well-kerned fonts, the characters `AV`, `Ta` and `We` nest together and make character spacing more uniform and pleasant to read than the equivalent text without kerning.\n\nThe property corresponds to the [`font-kerning`](/en-US/docs/Web/CSS/font-kerning) CSS property.", + "fontstretch": "\n\nThe **`CanvasRenderingContext2D.fontStretch`** property of the [Canvas API](/en-US/docs/Web/API/Canvas_API) specifies how the font may be expanded or condensed when drawing text.\n\nThe property corresponds to the [`font-stretch`](/en-US/docs/Web/CSS/font-stretch) CSS property when used with keywords (percentage values are not supported).", + "fontvariantcaps": "\n\nThe **`CanvasRenderingContext2D.fontVariantCaps`** property of the [Canvas API](/en-US/docs/Web/API/Canvas_API) specifies an alternative capitalization of the rendered text.\n\nThis corresponds to the CSS [`font-variant-caps`](/en-US/docs/Web/CSS/font-variant-caps) property.", + "getcontextattributes": "\n\nThe **`CanvasRenderingContext2D.getContextAttributes()`** method returns an object that contains attributes used by the context.\n\nNote that context attributes may be requested when creating the context with [`HTMLCanvasElement.getContext()`](/en-US/docs/Web/API/HTMLCanvasElement/getContext), but the attributes that are actually supported and used may differ.", + "getimagedata": "\n\nThe [CanvasRenderingContext2D] method\n**`getImageData()`** of the Canvas 2D API returns an\n[ImageData] object representing the underlying pixel data for a specified\nportion of the canvas.\n\nThis method is not affected by the canvas's transformation matrix. 
If the specified\nrectangle extends outside the bounds of the canvas, the pixels outside the canvas are\ntransparent black in the returned `ImageData` object.\n\n> **Note:** Image data can be painted onto a canvas using the\n> [CanvasRenderingContext2D.putImageData] method.\n\nYou can find more information about `getImageData()` and general\nmanipulation of canvas contents in [Pixel manipulation with canvas](/en-US/docs/Web/API/Canvas_API/Tutorial/Pixel_manipulation_with_canvas).", + "getlinedash": "\n\nThe **`getLineDash()`** method of the Canvas 2D API's\n[CanvasRenderingContext2D] interface gets the current line dash pattern.", + "gettransform": "\n\nThe **`CanvasRenderingContext2D.getTransform()`** method of the Canvas 2D API retrieves the current transformation matrix being applied to the context.", + "globalalpha": "\n\nThe\n**`CanvasRenderingContext2D.globalAlpha`**\nproperty of the Canvas 2D API specifies the alpha (transparency) value that is applied\nto shapes and images before they are drawn onto the canvas.\n\n> **Note:** See also the chapter [Applying styles and color](/en-US/docs/Web/API/Canvas_API/Tutorial/Applying_styles_and_colors) in the [Canvas Tutorial](/en-US/docs/Web/API/Canvas_API/Tutorial).", + "globalcompositeoperation": "\n\nThe\n**`CanvasRenderingContext2D.globalCompositeOperation`**\nproperty of the Canvas 2D API sets the type of compositing operation to apply when\ndrawing new shapes.\n\nSee also [Compositing and clipping](/en-US/docs/Web/API/Canvas_API/Tutorial/Compositing) in the [Canvas Tutorial](/en-US/docs/Web/API/Canvas_API/Tutorial).", + "imagesmoothingenabled": "\n\nThe **`imageSmoothingEnabled`** property of the\n[CanvasRenderingContext2D] interface, part of the [Canvas API](/en-US/docs/Web/API/Canvas_API), determines whether scaled images\nare smoothed (`true`, default) or not (`false`). On getting the\n`imageSmoothingEnabled` property, the last value it was set to is returned.\n\nThis property is useful for games and other apps that use pixel art. When enlarging\nimages, the default resizing algorithm will blur the pixels. 
Set this property to\n`false` to retain the pixels' sharpness.\n\n> **Note:** You can adjust the smoothing quality with the\n> [CanvasRenderingContext2D.imageSmoothingQuality]\n> property.", + "imagesmoothingquality": "\n\nThe **`imageSmoothingQuality`** property of the\n[CanvasRenderingContext2D] interface, part of the [Canvas API](/en-US/docs/Web/API/Canvas_API), lets you set the quality of\nimage smoothing.\n\n> **Note:** For this property to have an effect,\n> [CanvasRenderingContext2D.imageSmoothingEnabled]\n> must be `true`.", + "iscontextlost": "\n\nThe **`CanvasRenderingContext2D.isContextLost()`** method of the Canvas 2D API returns `true` if the rendering context is lost (and has not yet been reset).\nThis might occur due to driver crashes, running out of memory, and so on.\n\nIf the user agent detects that the canvas backing storage is lost it will fire the [`contextlost` event](/en-US/docs/Web/API/HTMLCanvasElement/contextlost_event) at the associated [`HTMLCanvasElement`](/en-US/docs/Web/API/HTMLCanvasElement).\nIf this event is not cancelled it will attempt to reset the backing storage to the default state (this is equivalent to calling [CanvasRenderingContext2D.reset]).\nOn success it will fire the [`contextrestored` event](/en-US/docs/Web/API/HTMLCanvasElement/contextrestored_event), indicating that the context is ready to reinitialize and redraw.", + "ispointinpath": "\n\nThe\n**`CanvasRenderingContext2D.isPointInPath()`**\nmethod of the Canvas 2D API reports whether or not the specified point is contained in\nthe current path.", + "ispointinstroke": "\n\nThe\n**`CanvasRenderingContext2D.isPointInStroke()`**\nmethod of the Canvas 2D API reports whether or not the specified point is inside the\narea contained by the stroking of a path.", + "letterspacing": "\n\nThe **`CanvasRenderingContext2D.letterSpacing`** property of the [Canvas API](/en-US/docs/Web/API/Canvas_API) specifies the spacing between letters when drawing text.\n\nThis corresponds to the CSS [`letter-spacing`](/en-US/docs/Web/CSS/letter-spacing) property.", + "linecap": "\n\nThe\n**`CanvasRenderingContext2D.lineCap`**\nproperty of the Canvas 2D API determines the shape used to draw the end points of lines.\n\n> **Note:** Lines can be drawn with the\n> [CanvasRenderingContext2D.stroke], [CanvasRenderingContext2D.strokeRect],\n> and [CanvasRenderingContext2D.strokeText] methods.", + "linedashoffset": "\n\nThe\n**`CanvasRenderingContext2D.lineDashOffset`**\nproperty of the Canvas 2D API sets the line dash offset, or \"phase.\"\n\n> **Note:** Lines are drawn by calling the\n> [CanvasRenderingContext2D.stroke] method.", + "linejoin": "\n\nThe\n**`CanvasRenderingContext2D.lineJoin`**\nproperty of the Canvas 2D API determines the shape used to join two line segments where\nthey meet.\n\nThis property has no effect wherever two connected segments have the same direction,\nbecause no joining area will be added in this case. 
Degenerate segments with a length of\nzero (i.e., with all endpoints and control points at the exact same position) are also\nignored.\n\n> **Note:** Lines can be drawn with the\n> [CanvasRenderingContext2D.stroke],\n> [CanvasRenderingContext2D.strokeRect],\n> and [CanvasRenderingContext2D.strokeText] methods.", + "lineto": "\n\nThe [CanvasRenderingContext2D] method\n**`lineTo()`**, part of the Canvas 2D API, adds a straight line\nto the current sub-path by connecting the sub-path's last point to the specified\n`(x, y)` coordinates.\n\nLike other methods that modify the current path, this method does not directly render\nanything. To draw the path onto a canvas, you can use the\n[CanvasRenderingContext2D.fill] or\n[CanvasRenderingContext2D.stroke] methods.", + "linewidth": "\n\nThe\n**`CanvasRenderingContext2D.lineWidth`**\nproperty of the Canvas 2D API sets the thickness of lines.\n\n> **Note:** Lines can be drawn with the\n> [CanvasRenderingContext2D.stroke],\n> [CanvasRenderingContext2D.strokeRect],\n> and [CanvasRenderingContext2D.strokeText] methods.", + "measuretext": "\n\nThe\n`CanvasRenderingContext2D.measureText()`\nmethod returns a [TextMetrics] object that contains information about the\nmeasured text (such as its width, for example).", + "miterlimit": "\n\nThe **`CanvasRenderingContext2D.miterLimit`** property of the\nCanvas 2D API sets the miter limit ratio.\n\n> **Note:** For more info about miters, see [Applying styles and color](/en-US/docs/Web/API/Canvas_API/Tutorial/Applying_styles_and_colors) in the [Canvas tutorial](/en-US/docs/Web/API/Canvas_API/Tutorial).", + "moveto": "\n\nThe\n**`CanvasRenderingContext2D.moveTo()`**\nmethod of the Canvas 2D API begins a new sub-path at the point specified by the given\n`(x, y)` coordinates.", + "putimagedata": "\n\nThe **`CanvasRenderingContext2D.putImageData()`**\nmethod of the Canvas 2D API paints data from the given [ImageData] object\nonto the canvas. If a dirty rectangle is provided, only the pixels from that rectangle\nare painted. This method is not affected by the canvas transformation matrix.\n\n> **Note:** Image data can be retrieved from a canvas using the\n> [CanvasRenderingContext2D.getImageData] method.\n\nYou can find more information about `putImageData()` and general\nmanipulation of canvas contents in the article [Pixel manipulation with canvas](/en-US/docs/Web/API/Canvas_API/Tutorial/Pixel_manipulation_with_canvas).", + "quadraticcurveto": "\n\nThe\n**`CanvasRenderingContext2D.quadraticCurveTo()`**\nmethod of the Canvas 2D API adds a quadratic [Bézier curve](/en-US/docs/Glossary/Bezier_curve) to the current\nsub-path. It requires two points: the first one is a control point and the second one is\nthe end point. The starting point is the latest point in the current path, which can be\nchanged using [CanvasRenderingContext2D.moveTo] before creating\nthe quadratic Bézier curve.", + "rect": "\n\nThe\n**`CanvasRenderingContext2D.rect()`**\nmethod of the Canvas 2D API adds a rectangle to the current path.\n\nLike other methods that modify the current path, this method does not directly render\nanything. 
To draw the rectangle onto a canvas, you can use the\n[CanvasRenderingContext2D.fill] or\n[CanvasRenderingContext2D.stroke] methods.\n\n> **Note:** To both create and render a rectangle in one step, use the\n> [CanvasRenderingContext2D.fillRect] or\n> [CanvasRenderingContext2D.strokeRect] methods.", + "reset": "\n\nThe **`CanvasRenderingContext2D.reset()`** method of the Canvas 2D API resets the rendering context to its default state, allowing it to be reused for drawing something else without having to explicitly reset all the properties.\n\nResetting clears the backing buffer, drawing state stack, any defined paths, and styles.\nThis includes the current [transformation](/en-US/docs/Web/API/CanvasRenderingContext2D#transformations) matrix, [compositing](/en-US/docs/Web/API/CanvasRenderingContext2D#compositing) properties, clipping region, dash list, [line styles](/en-US/docs/Web/API/CanvasRenderingContext2D#line_styles), [text styles](/en-US/docs/Web/API/CanvasRenderingContext2D#text_styles), [shadows](/en-US/docs/Web/API/CanvasRenderingContext2D#shadows), [image smoothing](/en-US/docs/Web/API/CanvasRenderingContext2D#image_smoothing), [filters](/en-US/docs/Web/API/CanvasRenderingContext2D#filters), and so on.", + "resettransform": "\n\nThe\n**`CanvasRenderingContext2D.resetTransform()`**\nmethod of the Canvas 2D API resets the current transform to the identity matrix.", + "restore": "\n\nThe\n**`CanvasRenderingContext2D.restore()`**\nmethod of the Canvas 2D API restores the most recently saved canvas state by popping the\ntop entry in the drawing state stack. If there is no saved state, this method does\nnothing.\n\nFor more information about the [drawing state](/en-US/docs/Web/API/CanvasRenderingContext2D/save#drawing_state), see [CanvasRenderingContext2D.save].", + "rotate": "\n\nThe\n**`CanvasRenderingContext2D.rotate()`**\nmethod of the Canvas 2D API adds a rotation to the transformation matrix.", + "roundrect": "\n\nThe **`CanvasRenderingContext2D.roundRect()`** method of the Canvas 2D API adds a rounded rectangle to the current path.\n\nThe radii of the corners can be specified in much the same way as the CSS [`border-radius`](/en-US/docs/Web/CSS/border-radius) property.\n\nLike other methods that modify the current path, this method does not directly render anything.\nTo draw the rounded rectangle onto a canvas, you can use the [CanvasRenderingContext2D.fill] or [CanvasRenderingContext2D.stroke] methods.", + "save": "\n\nThe\n**`CanvasRenderingContext2D.save()`**\nmethod of the Canvas 2D API saves the entire state of the canvas by pushing the current\nstate onto a stack.\n\n### The drawing state\n\nThe drawing state that gets saved onto a stack consists of:\n\n- The current transformation matrix.\n- The current clipping region.\n- The current dash list.\n- The current values of the following attributes:\n [CanvasRenderingContext2D.strokeStyle],\n [CanvasRenderingContext2D.fillStyle],\n [CanvasRenderingContext2D.globalAlpha],\n [CanvasRenderingContext2D.lineWidth],\n [CanvasRenderingContext2D.lineCap],\n [CanvasRenderingContext2D.lineJoin],\n [CanvasRenderingContext2D.miterLimit],\n [CanvasRenderingContext2D.lineDashOffset],\n [CanvasRenderingContext2D.shadowOffsetX],\n [CanvasRenderingContext2D.shadowOffsetY],\n [CanvasRenderingContext2D.shadowBlur],\n [CanvasRenderingContext2D.shadowColor],\n [CanvasRenderingContext2D.globalCompositeOperation], [CanvasRenderingContext2D.font],\n [CanvasRenderingContext2D.textAlign],\n [CanvasRenderingContext2D.textBaseline],\n 
[CanvasRenderingContext2D.direction],\n [CanvasRenderingContext2D.imageSmoothingEnabled].", + "scale": "\n\nThe\n**`CanvasRenderingContext2D.scale()`**\nmethod of the Canvas 2D API adds a scaling transformation to the canvas units\nhorizontally and/or vertically.\n\nBy default, one unit on the canvas is exactly one pixel. A scaling transformation\nmodifies this behavior. For instance, a scaling factor of 0.5 results in a unit size of\n0.5 pixels; shapes are thus drawn at half the normal size. Similarly, a scaling factor\nof 2.0 increases the unit size so that one unit becomes two pixels; shapes are thus\ndrawn at twice the normal size.", + "scrollpathintoview": " \n\nThe\n**`CanvasRenderingContext2D.scrollPathIntoView()`**\nmethod of the Canvas 2D API scrolls the current or given path into view. It is similar\nto [Element.scrollIntoView].", + "setlinedash": "\n\nThe **`setLineDash()`** method of the Canvas 2D API's\n[CanvasRenderingContext2D] interface sets the line dash pattern used when\nstroking lines. It uses an array of values that specify alternating lengths of lines\nand gaps which describe the pattern.\n\n> **Note:** To return to using solid lines, set the line dash list to an\n> empty array.", + "settransform": "\n\nThe\n**`CanvasRenderingContext2D.setTransform()`**\nmethod of the Canvas 2D API resets (overrides) the current transformation to the\nidentity matrix, and then invokes a transformation described by the arguments of this\nmethod. This lets you scale, rotate, translate (move), and skew the context.\n\n> **Note:** See also the [CanvasRenderingContext2D.transform] method; instead of overriding the current transform matrix, it\n> multiplies it with a given one.", + "shadowblur": "\n\nThe\n**`CanvasRenderingContext2D.shadowBlur`**\nproperty of the Canvas 2D API specifies the amount of blur applied to shadows. The\ndefault is `0` (no blur).\n\n> **Note:** Shadows are only drawn if the\n> [CanvasRenderingContext2D.shadowColor] property is set to\n> a non-transparent value. One of the `shadowBlur`,\n> [CanvasRenderingContext2D.shadowOffsetX], or\n> [CanvasRenderingContext2D.shadowOffsetY] properties must\n> be non-zero, as well.", + "shadowcolor": "\n\nThe\n**`CanvasRenderingContext2D.shadowColor`**\nproperty of the Canvas 2D API specifies the color of shadows.\n\nBe aware that the shadow's rendered opacity will be affected by the opacity of the\n[CanvasRenderingContext2D.fillStyle] color when filling, and\nof the [CanvasRenderingContext2D.strokeStyle] color when\nstroking.\n\n> **Note:** Shadows are only drawn if the `shadowColor`\n> property is set to a non-transparent value. One of the\n> [CanvasRenderingContext2D.shadowBlur],\n> [CanvasRenderingContext2D.shadowOffsetX], or\n> [CanvasRenderingContext2D.shadowOffsetY] properties must\n> be non-zero, as well.", + "shadowoffsetx": "\n\nThe\n**`CanvasRenderingContext2D.shadowOffsetX`**\nproperty of the Canvas 2D API specifies the distance that shadows will be offset\nhorizontally.\n\n> **Note:** Shadows are only drawn if the\n> [CanvasRenderingContext2D.shadowColor] property is set to\n> a non-transparent value. 
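The drawing-state stack and the shadow properties work together; the sketch below (assuming a context `ctx`) saves the default state, draws one rectangle with a blurred red shadow, then restores the previous state so later drawing is unaffected:

```js
ctx.save();                              // push the current drawing state
ctx.shadowColor = "rgba(255, 0, 0, 0.5)"; // must be non-transparent for shadows to draw
ctx.shadowBlur = 8;
ctx.shadowOffsetX = 4;
ctx.fillStyle = "navy";
ctx.fillRect(20, 20, 100, 60);
ctx.restore();                           // pop: shadow settings no longer apply
ctx.fillRect(140, 20, 100, 60);          // drawn without a shadow
```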
One of the [CanvasRenderingContext2D.shadowBlur], `shadowOffsetX`, or\n> [CanvasRenderingContext2D.shadowOffsetY] properties must\n> be non-zero, as well.", + "shadowoffsety": "\n\nThe\n**`CanvasRenderingContext2D.shadowOffsetY`**\nproperty of the Canvas 2D API specifies the distance that shadows will be offset\nvertically.\n\n> **Note:** Shadows are only drawn if the\n> [CanvasRenderingContext2D.shadowColor] property is set to\n> a non-transparent value. One of the [CanvasRenderingContext2D.shadowBlur],\n> [CanvasRenderingContext2D.shadowOffsetX], or `shadowOffsetY` properties must be non-zero, as\n> well.", + "stroke": "\n\nThe\n**`CanvasRenderingContext2D.stroke()`**\nmethod of the Canvas 2D API strokes (outlines) the current or given path with the\ncurrent stroke style.\n\nStrokes are aligned to the center of a path; in other words, half of the stroke is\ndrawn on the inner side, and half on the outer side.\n\nThe stroke is drawn using the [non-zero winding rule](https://en.wikipedia.org/wiki/Nonzero-rule), which\nmeans that path intersections will still get filled.", + "strokerect": "\n\nThe\n**`CanvasRenderingContext2D.strokeRect()`**\nmethod of the Canvas 2D API draws a rectangle that is stroked (outlined) according to\nthe current [CanvasRenderingContext2D.strokeStyle] and other\ncontext settings.\n\nThis method draws directly to the canvas without modifying the current path, so any\nsubsequent [CanvasRenderingContext2D.fill] or\n[CanvasRenderingContext2D.stroke] calls will have no effect\non it.", + "strokestyle": "\n\nThe **`CanvasRenderingContext2D.strokeStyle`** property of the\nCanvas 2D API specifies the color, gradient, or pattern to use for the strokes\n(outlines) around shapes. The default is `#000` (black).\n\n> **Note:** For more examples of stroke and fill styles, see [Applying styles and color](/en-US/docs/Web/API/Canvas_API/Tutorial/Applying_styles_and_colors) in the [Canvas tutorial](/en-US/docs/Web/API/Canvas_API/Tutorial).", + "stroketext": "\n\nThe [CanvasRenderingContext2D] method\n**`strokeText()`**, part of the Canvas 2D API, strokes — that\nis, draws the outlines of — the characters of a text string at the specified\ncoordinates. An optional parameter allows specifying a maximum width for the rendered\ntext, which the will achieve by condensing the text or by\nusing a lower font size.\n\nThis method draws directly to the canvas without modifying the current path, so any\nsubsequent [CanvasRenderingContext2D.fill] or\n[CanvasRenderingContext2D.stroke] calls will have no effect\non it.\n\n> **Note:** Use the [CanvasRenderingContext2D.fillText] method to\n> fill the text characters rather than having just their outlines drawn.", + "textalign": "\n\nThe\n**`CanvasRenderingContext2D.textAlign`**\nproperty of the Canvas 2D API specifies the current text alignment used when drawing\ntext.\n\nThe alignment is relative to the `x` value of the\n[CanvasRenderingContext2D.fillText] method. 
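A small example of the text-drawing calls above (assuming a context `ctx`):

```js
ctx.font = "32px sans-serif";
ctx.textAlign = "center";                // x becomes the horizontal center of the text
ctx.fillText("filled", 150, 50);
ctx.strokeText("outlined", 150, 100);    // draws only the glyph outlines
ctx.strokeText("clamped", 150, 140, 80); // optional maxWidth condenses the text
```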
For example, if\n`textAlign` is `\"center\"`, then the text's left edge will be at\n`x - (textWidth / 2)`.", + "textbaseline": "\n\nThe\n**`CanvasRenderingContext2D.textBaseline`**\nproperty of the Canvas 2D API specifies the current text baseline used when drawing\ntext.", + "textrendering": "\n\nThe **`CanvasRenderingContext2D.textRendering`** property of the [Canvas API](/en-US/docs/Web/API/Canvas_API) provides information to the rendering engine about what to optimize for when rendering text.\n\nThe values correspond to the SVG [`text-rendering`](/en-US/docs/Web/SVG/Attribute/text-rendering) attribute (and CSS [`text-rendering`](/en-US/docs/Web/CSS/text-rendering) property).", + "transform": "\n\nThe\n**`CanvasRenderingContext2D.transform()`**\nmethod of the Canvas 2D API multiplies the current transformation with the matrix\ndescribed by the arguments of this method. This lets you scale, rotate, translate\n(move), and skew the context.\n\n> **Note:** See also the\n> [CanvasRenderingContext2D.setTransform] method, which\n> resets the current transform to the identity matrix and then invokes\n> `transform()`.", + "translate": "\n\nThe\n**`CanvasRenderingContext2D.translate()`**\nmethod of the Canvas 2D API adds a translation transformation to the current matrix.", + "wordspacing": "\n\nThe **`CanvasRenderingContext2D.wordSpacing`** property of the [Canvas API](/en-US/docs/Web/API/Canvas_API) specifies the spacing between words when drawing text.\n\nThis corresponds to the CSS [`word-spacing`](/en-US/docs/Web/CSS/word-spacing) property." + } + }, + "capturecontroller": { + "docs": "\n\nThe **`CaptureController`** interface provides methods that can be used to further manipulate a capture session separate from its initiation via [MediaDevices.getDisplayMedia].\n\nA `CaptureController` object is associated with a capture session by passing it into a [MediaDevices.getDisplayMedia] call as the value of the options object's `controller` property.", + "properties": { + "setfocusbehavior": "\n\nThe [CaptureController] interface's **`setFocusBehavior()`** method controls whether the captured tab or window will be focused when an associated [MediaDevices.getDisplayMedia] `Promise` fulfills, or whether the focus will remain with the tab containing the capturing app.\n\nYou can set this behavior multiple times before the [MediaDevices.getDisplayMedia] call, or once immediately after its `Promise` resolves. After that, the focus behavior is said to be finalized, and can't be changed." + } + }, + "caretposition": { + "docs": " \n\nThe `CaretPosition` interface represents the caret position, an indicator for the text insertion point. You can get a `CaretPosition` using the [Document.caretPositionFromPoint] method." 
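For the `CaptureController` entry above, a minimal usage sketch might look like the following; the `controller` option and the `"no-focus-change"` value follow the description, but treat the exact names as assumptions to verify against current browser support:

```js
async function startCapture() {
  const controller = new CaptureController();
  // Keep focus on the capturing tab rather than the captured surface
  // ("no-focus-change" is assumed here; check the current spec values).
  controller.setFocusBehavior("no-focus-change");
  const stream = await navigator.mediaDevices.getDisplayMedia({ controller });
  return stream;
}
```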
+ }, + "cdatasection": { + "docs": "\n\nThe **`CDATASection`** interface represents a CDATA section\nthat can be used within XML to include extended portions of unescaped text.\nWhen inside a CDATA section, the symbols `<` and `&` don't need escaping\nas they normally do.\n\nIn XML, a CDATA section looks like:\n\n```xml\n\n```\n\nFor example:\n\n```html\n\n Here is a CDATA section: & ]]> with all kinds of unescaped text.\n\n```\n\nThe only sequence which is not allowed within a CDATA section is the closing sequence\nof a CDATA section itself, `]]>`.\n\n> **Note:** CDATA sections should not be used within HTML they are considered as comments and not displayed.\n\n" + }, + "channelmergernode": { + "docs": "\n\nThe `ChannelMergerNode` interface, often used in conjunction with its opposite, [ChannelSplitterNode], reunites different mono inputs into a single output. Each input is used to fill a channel of the output. This is useful for accessing each channels separately, e.g. for performing channel mixing where gain must be separately controlled on each channel.\n\n![Default channel merger node with six mono inputs combining to form a single output.](webaudiomerger.png)\n\nIf `ChannelMergerNode` has one single output, but as many inputs as there are channels to merge; the number of inputs is defined as a parameter of its constructor and the call to [BaseAudioContext/createChannelMerger]. In the case that no value is given, it will default to `6`.\n\nUsing a `ChannelMergerNode`, it is possible to create outputs with more channels than the rendering hardware is able to process. In that case, when the signal is sent to the [BaseAudioContext/listener] object, supernumerary channels will be ignored.\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
- Number of inputs: variable; default to 6.
- Number of outputs: 1
- Channel count mode: \"explicit\"
- Channel count: 2 (not used in the default count mode)
- Channel interpretation: \"speakers\"
" + }, + "channelsplitternode": { + "docs": "\n\nThe `ChannelSplitterNode` interface, often used in conjunction with its opposite, [ChannelMergerNode], separates the different channels of an audio source into a set of mono outputs. This is useful for accessing each channel separately, e.g. for performing channel mixing where gain must be separately controlled on each channel.\n\n![Default channel splitter node with a single input splitting to form 6 mono outputs.](webaudiosplitter.png)\n\nIf your `ChannelSplitterNode` always has one single input, the amount of outputs is defined by a parameter on its constructor and the call to [BaseAudioContext/createChannelSplitter]. In the case that no value is given, it will default to `6`. If there are fewer channels in the input than there are outputs, supernumerary outputs are silent.\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
- Number of inputs: 1
- Number of outputs: variable; default to 6.
- Channel count mode: \"explicit\" (older implementations, as per earlier versions of the spec, use \"max\")
- Channel count: fixed to the number of outputs (older implementations, as per earlier versions of the spec, use 2; not used in the default count mode)
- Channel interpretation: \"discrete\"
" + }, + "characterdata": { + "docs": "\n\nThe **`CharacterData`** abstract interface represents a [Node] object that contains characters. This is an abstract interface, meaning there aren't any objects of type `CharacterData`: it is implemented by other interfaces like [Text], [Comment], [CDATASection], or [ProcessingInstruction], which aren't abstract.\n\n", + "properties": { + "after": "\n\nThe **`after()`** method of the [CharacterData] interface\ninserts a set of [Node] objects or strings in the children list of the\nobject's parent, just after the object itself.\n\nStrings are inserted as [Text] nodes; the string is being passed as argument to the [Text/Text] constructor.", + "appenddata": "\n\nThe **`appendData()`** method of the [CharacterData] interface\nadds the provided data to the end of the node's current data.", + "before": "\n\nThe **`before()`** method of the [CharacterData] interface\ninserts a set of [Node] objects and strings\nin the children list of the `CharacterData`'s parent, just before the `CharacterData` node.\n\nStrings are inserted as [Text] nodes; the string is being passed as argument to the [Text/Text] constructor.", + "data": "\n\nThe **`data`** property of the [CharacterData] interface represent the value of the current object's data.", + "deletedata": "\n\nThe **`deleteData()`** method of the [CharacterData] interface\nremoves all or part of the data from this `CharacterData` node.", + "insertdata": "\n\nThe **`insertData()`** method of the [CharacterData] interface\ninserts the provided data into this `CharacterData` node's current data,\nat the provided offset from the start of the existing data.\nThe provided data is spliced into the existing data.", + "length": "\n\nThe read-only **`CharacterData.length`** property\nreturns the number of characters in the contained data, as a positive integer.", + "nextelementsibling": "\n\nThe read-only **`nextElementSibling`** property of the [CharacterData] interface\nreturns the first [Element] node following the specified one in its parent's\nchildren list, or `null` if the specified element is the last one in the list.", + "previouselementsibling": "\n\nThe read-only **`previousElementSibling`** of the [CharacterData] interface\nreturns the first [Element] before the current node in its parent's children list,\nor `null` if there is none.", + "remove": "\n\nThe **`remove()`** method of the [CharacterData] removes the text contained in the node.", + "replacedata": "\n\nThe **`replaceData()`** method of the [CharacterData] interface removes a certain number of characters of the existing text in a given `CharacterData` node and replaces those characters with the text provided.", + "replacewith": "\n\nThe **`replaceWith()`** method of the [CharacterData] interface\nreplaces this node in the children list of its parent\nwith a set of [Node] objects or string.\n\nStrings are inserted as [Text] nodes; the string is being passed as argument to the [Text/Text] constructor.", + "substringdata": "\n\nThe **`substringData()`** method of the [CharacterData] interface\nreturns a portion of the existing data,\nstarting at the specified index\nand extending for a given number of characters afterwards." + } + }, + "client": { + "docs": "\n\nThe `Client` interface represents an executable context such as a [Worker], or a [SharedWorker]. [Window] clients are represented by the more-specific [WindowClient]. 
You can get `Client`/`WindowClient` objects from methods such as [Clients.matchAll] and [Clients.get].", + "properties": { + "frametype": "\n\nThe **`frameType`** read-only property of the [Client] interface indicates the type of browsing context of the current [Client]. This value can be one of `\"auxiliary\"`, `\"top-level\"`, `\"nested\"`, or `\"none\"`.", + "id": "\n\nThe **`id`** read-only property of the [Client] interface returns the universally unique identifier of the [Client] object.", + "postmessage": "\n\nThe **`postMessage()`** method of the\n[Client] interface allows a service worker to send a message to a client\n(a [Window], [Worker], or [SharedWorker]). The\nmessage is received in the \"`message`\" event on\n[ServiceWorkerContainer].", + "type": "\n\nThe **`type`** read-only property of the [Client]\ninterface indicates the type of client the service worker is controlling.", + "url": "\n\nThe **`url`** read-only property of the [Client]\ninterface returns the URL of the current service worker client." + } + }, + "clients": { + "docs": "\n\nThe `Clients` interface provides access to [Client] objects. Access it via `[ServiceWorkerGlobalScope].clients` within a [service worker](/en-US/docs/Web/API/Service_Worker_API).", + "properties": { + "claim": "\n\nThe **`claim()`** method of the [Clients] interface allows an active service worker to set itself as the [ServiceWorkerContainer.controller] for all clients within its [ServiceWorkerRegistration.scope].\nThis triggers a \"`controllerchange`\" event on [ServiceWorkerContainer] in any clients that become controlled by this service worker.\n\nWhen a service worker is initially registered, pages won't use it until they next\nload. The `claim()` method causes those pages to be controlled immediately.\nBe aware that this results in your service worker controlling pages that loaded\nregularly over the network, or possibly via a different service worker.", + "get": "\n\nThe **`get()`** method of the\n[Clients] interface gets a service worker client matching a given\n`id` and returns it in a `Promise`.", + "matchall": "\n\nThe **`matchAll()`** method of the [Clients]\ninterface returns a `Promise` for a list of service worker\n[Client] objects. Include the `options` parameter to return all service worker\nclients whose origin is the same as the associated service worker's origin. If options\nare not included, the method returns only the service worker clients controlled by the\nservice worker.", + "openwindow": "\n\nThe **`openWindow()`** method of the [Clients]\ninterface creates a new top level browsing context and loads a given URL. If the calling\nscript doesn't have permission to show popups, `openWindow()` will throw an\n`InvalidAccessError`.\n\nIn Firefox, the method is allowed to show popups only when called as the result of a\nnotification click event.\n\nIn Chrome for Android, the method may instead open the URL in an existing browsing\ncontext provided by a [standalone web app](/en-US/docs/Web/Progressive_web_apps) previously added to the user's home screen. As of recently, this also works on\nChrome for Windows." + } + }, + "clipboard": { + "docs": "\n\nThe **`Clipboard`** interface implements the [Clipboard API](/en-US/docs/Web/API/Clipboard_API), providing—if the user grants permission—both read and write access to the contents of the system clipboard. 
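Inside a service worker's global scope, the `Clients` methods above are typically used as in this sketch (`self` is the [ServiceWorkerGlobalScope]):

```js
// service-worker.js
self.addEventListener("activate", (event) => {
  // Take control of currently uncontrolled pages immediately.
  event.waitUntil(self.clients.claim());
});

self.addEventListener("notificationclick", (event) => {
  event.notification.close();
  event.waitUntil(
    self.clients.matchAll({ type: "window" }).then((clientList) => {
      if (clientList.length > 0) return clientList[0].focus();
      return self.clients.openWindow("/");
    }),
  );
});
```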
The Clipboard API can be used to implement cut, copy, and paste features within a web application.\n\nThe system clipboard is exposed through the global [Navigator.clipboard] property.\n\nCalls to the methods of the `Clipboard` object will not succeed if the user hasn't granted the needed permissions using the [Permissions API](/en-US/docs/Web/API/Permissions_API) and the `'clipboard-read'` or `'clipboard-write'` permission as appropriate.\n\n> **Note:** In reality, at this time browser requirements for access to the clipboard vary significantly. Please see the section [Clipboard availability](#clipboard_availability) for details.\n\nAll of the Clipboard API methods operate asynchronously; they return a `Promise` which is resolved once the clipboard access has been completed. The promise is rejected if clipboard access is denied.", + "properties": { + "read": "\n\nThe **`read()`** method of the\n[Clipboard] interface requests a copy of the clipboard's contents,\ndelivering the data to the returned `Promise` when the promise is\nresolved. Unlike [Clipboard.readText], the\n`read()` method can return arbitrary data, such as images. This method can\nalso return text.\n\n> **Note:** The asynchronous Clipboard and [Permissions APIs](/en-US/docs/Web/API/Permissions_API) are still in the\n> process of being integrated into most browsers, so they often deviate from the\n> official rules for permissions and the like. Be sure to review the [compatibility table](#browser_compatibility) before using these methods.", + "readtext": "\n\nThe **[Clipboard]** interface's\n**`readText()`** method returns a `Promise` which\nresolves with a copy of the textual contents of the system clipboard.", + "write": "\n\nThe [Clipboard] method\n**`write()`** writes arbitrary data, such as images, to the\nclipboard. This can be used to implement cut and copy functionality.\n\nThe `\"clipboard-write\"` permission of the [Permissions API](/en-US/docs/Web/API/Permissions_API), is granted\nautomatically to pages when they are in the active tab.\n\n> **Note:** Browser support for the asynchronous clipboard APIs is still\n> in the process of being implemented. Be sure to check the [compatibility table](#browser_compatibility) as well as\n> [Clipboard availability](/en-US/docs/Web/API/Clipboard#clipboard_availability) for more\n> information.\n\n> **Note:** For parity with Google Chrome, Firefox only allows this function to work with text, HTML, and PNG data.", + "writetext": "\n\nThe [Clipboard] interface's **`writeText()`**\nproperty writes the specified text string to the system clipboard. Text may be read back\nusing either [Clipboard.read] or [Clipboard.readText]." + } + }, + "clipboardevent": { + "docs": "\n\nThe **`ClipboardEvent`** interface represents events providing information related to modification of the clipboard, that is [Element/cut_event], [Element/copy_event], and [Element/paste_event] events.\n\n", + "properties": { + "clipboarddata": "\n\nThe **`ClipboardEvent.clipboardData`** property holds a [DataTransfer] object, which can be used:\n\n- to specify what data should be put into the clipboard from the [Element/cut_event] and\n [Element/copy_event] event handlers, typically with a [DataTransfer.setData] call;\n- to obtain the data to be pasted from the [Element/paste_event] event handler, typically\n with a [DataTransfer.getData] call.\n\nSee the [Element/cut_event], [Element/copy_event], and [Element/paste_event] events\ndocumentation for more information." 
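A minimal sketch of the asynchronous clipboard methods and the `clipboardData` property described above (permission prompts and promise rejections still need handling in real code):

```js
async function copyAndReadBack() {
  await navigator.clipboard.writeText("Hello, clipboard!");
  const text = await navigator.clipboard.readText();
  console.log(text); // "Hello, clipboard!"
}

// From a copy event handler, via ClipboardEvent.clipboardData:
document.addEventListener("copy", (event) => {
  event.clipboardData.setData("text/plain", "Copied by the page");
  event.preventDefault(); // use our data instead of the current selection
});
```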
+ } + }, + "clipboarditem": { + "docs": "\n\nThe **`ClipboardItem`** interface of the [Clipboard API] represents a single item format, used when reading or writing data via the [Clipboard API]. That is [clipboard.read] and [clipboard.write] respectively.\n\nThe benefit of having the **`ClipboardItem`** interface to represent data, is that it enables developers to cope with the varying scope of file types and data easily.\n\nAccess to the contents of the clipboard is gated behind the [Permissions API](/en-US/docs/Web/API/Permissions_API): The `clipboard-write` permission is granted automatically to pages when they are in the active tab. The `clipboard-read` permission must be requested, which you can do by trying to read data from the clipboard.\n\n> **Note:** To work with text see the [Clipboard.readText] and [Clipboard.writeText] methods of the [Clipboard] interface.\n\n> **Note:** You can only pass in one clipboard item at a time.", + "properties": { + "gettype": "\n\nThe **`getType()`** method of the [ClipboardItem] interface returns a `Promise` that resolves with a [Blob] of the requested or an error if the MIME type is not found.", + "presentationstyle": "\n\nThe read-only\n**`presentationStyle`** property of the [ClipboardItem]\ninterface returns a string indicating how an item should be presented.", + "types": "\n\nThe read-only\n**`types`** property of the [ClipboardItem]\ninterface returns an `Array` of \navailable within the [ClipboardItem]" + } + }, + "closeevent": { + "docs": "\n\nA `CloseEvent` is sent to clients using when the connection is closed. This is delivered to the listener indicated by the `WebSocket` object's `onclose` attribute.\n\n", + "properties": { + "code": "\n\nThe **`code`** read-only property of the [CloseEvent] interface returns a [WebSocket connection close code](https://www.rfc-editor.org/rfc/rfc6455.html#section-7.1.5) indicating the reason the server gave for closing the connection.", + "reason": "\n\nThe **`reason`** read-only property of the [CloseEvent] interface returns the [WebSocket connection close reason](https://www.rfc-editor.org/rfc/rfc6455.html#section-7.1.6) the server gave for closing the connection; that is, a concise human-readable prose explanation for the closure.", + "wasclean": "\n\nThe **`wasClean`** read-only property of the [CloseEvent] interface returns `true` if the connection closed cleanly." + } + }, + "comment": { + "docs": "\n\nThe **`Comment`** interface represents textual notations within markup; although it is generally not visually shown, such comments are available to be read in the source view.\n\nComments are represented in HTML and XML as content between '``'. 
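For the `ClipboardItem` entry above, writing a PNG blob to the clipboard typically looks like this sketch (the canvas used as the blob source is an assumption for illustration):

```js
async function copyCanvasImage(canvas) {
  const blob = await new Promise((resolve) =>
    canvas.toBlob(resolve, "image/png"),
  );
  const item = new ClipboardItem({ "image/png": blob });
  await navigator.clipboard.write([item]); // one ClipboardItem at a time
}
```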
In XML, like inside SVG or MathML markup, the character sequence '`--`' cannot be used within a comment.\n\n" + }, + "compositionevent": { + "docs": "\n\nThe DOM **`CompositionEvent`** represents events that occur due to the user indirectly entering text.\n\n", + "properties": { + "data": "\n\nThe **`data`** read-only property of the\n[CompositionEvent] interface returns the characters generated by the input\nmethod that raised the event; its exact nature varies depending on the type of event\nthat generated the `CompositionEvent` object.", + "initcompositionevent": "\n\nThe **`initCompositionEvent()`**\nmethod of the [CompositionEvent] interface initializes the attributes of a\n`CompositionEvent` object instance.\n\n> **Note:** The correct way of creating a [CompositionEvent] is to use\n> the constructor [CompositionEvent.CompositionEvent].", + "locale": "\n\nThe **`locale`** read-only property of the\n[CompositionEvent] interface returns the locale of current input method\n(for example, the keyboard layout locale if the composition is associated with IME).\n\n> **Warning:** Even for browsers supporting it, don't trust the value contained in this property.\n> Even if technically it is accessible, the way to set it up when creating a [CompositionEvent]\n> is not guaranteed to be coherent." + } + }, + "compressionstream": { + "docs": "\n\nThe **`CompressionStream`** interface of the [Compression Streams API] is an API for compressing a stream of data.", + "properties": { + "readable": "\n\nThe **`readable`** read-only property of the [CompressionStream] interface returns a [ReadableStream].", + "writable": "\n\nThe **`writable`** read-only property of the [CompressionStream] interface returns a [WritableStream]." + } + }, + "console": { + "docs": "\n\nThe **`console`** object provides access to the debugging console (e.g., the [Web console](https://firefox-source-docs.mozilla.org/devtools-user/web_console/index.html) in Firefox). The specifics of how it works vary from browser to browser or server runtimes (Node.js, for example), but there is a _de facto_ set of features that are typically provided.\n\nThe `console` object can be accessed from any global object. [Window] on browsing scopes and [WorkerGlobalScope] as specific variants in workers via the property console. It's exposed as [Window.console], and can be referenced as `console`. For example:\n\n```js\nconsole.log(\"Failed to open the specified link\");\n```\n\nThis page documents the [Methods](#methods) available on the `console` object and gives a few [Usage](#usage) examples.\n\n> **Note:** Certain online IDEs and editors may implement the console API differently than the browsers. As a result, certain functionality of the console API, such as the timer methods, may not be outputted in the console of online IDEs or editors. Always open your browser's DevTools console to see the logs as shown in this documentation.", + "properties": { + "assert_static": "\n\nThe **`console.assert()`** static method writes an error message to the console if the assertion is false. If the assertion is true, nothing happens.\n\n", + "clear_static": "\n\nThe **`console.clear()`** static method clears the console if the console allows it. 
A graphical console, like those running on browsers, will allow it; a console displaying on the terminal, like the one running on Node, will not support it, and will have no effect (and no error).", + "count_static": "\n\nThe **`console.count()`** static method logs the number of times that this particular call to `count()` has been called.\n\n", + "countreset_static": "\n\nThe **`console.countReset()`** static method resets counter used with [console/count_static].\n\n", + "debug_static": "\n\nThe **`console.debug()`** static method outputs a message to the console at the \"debug\" log level. The message is only displayed to the user if the console is configured to display debug output. In most cases, the log level is configured within the console UI. This log level might correspond to the `Debug` or `Verbose` log level.\n\n", + "dir_static": "\n\nThe **`console.dir()`** static method displays an interactive list of the properties of the specified JavaScript object. The output is presented as a hierarchical listing with disclosure triangles that let you see the contents of child objects.\n\nIn other words, `console.dir()` is the way to see all the properties of a specified JavaScript object in console by which the developer can easily get the properties of the object.\n\n![A screenshot of the Firefox console where console.dir(document.location) is run. We can see the URL of the page, followed by a block of properties. If the property is a function or an object, a disclosure triangle is prepended.](console-dir.png)", + "dirxml_static": "\n\nThe **`console.dirxml()`** static method displays an interactive tree of the descendant elements of the specified XML/HTML element. If it is not possible to display as an element the JavaScript Object view is shown instead. The output is presented as a hierarchical listing of expandable nodes that let you see the contents of child nodes.", + "error_static": "\n\nThe **`console.error()`** static method outputs an error message to the console.\n\n", + "group_static": "\n\nThe **`console.group()`** static method creates a new inline group in the [Web console](https://firefox-source-docs.mozilla.org/devtools-user/web_console/index.html) log, causing any subsequent console messages to be indented by an additional level, until [console/groupend_static] is called.\n\n", + "groupcollapsed_static": "\n\nThe **`console.groupCollapsed()`** static method creates a new inline group in the console. Unlike [console/group_static], however, the new group is created collapsed. The user will need to use the disclosure button next to it to expand it, revealing the entries created in the group.\n\nCall [console/groupEnd_static] to back out to the parent group.\n\nSee [Using groups in the console](/en-US/docs/Web/API/console#using_groups_in_the_console) in the [console] documentation for details and examples.\n\n", + "groupend_static": "\n\nThe **`console.groupEnd()`** static method exits the current inline group in the console. See [Using groups in the console](/en-US/docs/Web/API/console#using_groups_in_the_console) in the [console] documentation for details and examples.\n\n", + "info_static": "\n\nThe **`console.info()`** static method outputs an informational message to the console. In Firefox, a small \"i\" icon is displayed next to these items in the console's log.\n\n", + "log_static": "\n\nThe **`console.log()`** static method outputs a message to the console. 
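A short sketch pulling together several of the console methods described above:

```js
console.group("Fetching data");
console.log("request sent");
console.count("requests");                      // requests: 1
console.count("requests");                      // requests: 2
console.assert(2 + 2 === 4, "math is broken");  // logs nothing (assertion holds)
console.groupEnd();
```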
The message may be a single string (with optional substitution values), or it may be any one or more JavaScript objects.\n\n", + "profile_static": "\n\nThe **`console.profile()`** static method starts recording a performance profile (for example, the [Firefox performance tool](https://firefox-source-docs.mozilla.org/devtools-user/performance/index.html)).\n\nYou can optionally supply an argument to name the profile and this then enables you to stop only that profile if multiple profiles being recorded. See [console/profileEnd_static] to see how this argument is interpreted.\n\nTo stop recording call [console/profileEnd_static].\n\n", + "profileend_static": "\n\nThe **`console.profileEnd()`** static method stops recording a profile previously started with [console/profile_static].\n\nYou can optionally supply an argument to name the profile. Doing so enables you to stop only that profile if you have multiple profiles being recorded.\n\n- If `console.profileEnd()` is passed a profile name, and it matches the name of a profile being recorded, then that profile is stopped.\n- If `console.profileEnd()` is passed a profile name and it does not match the name of a profile being recorded, no changes will be made.\n- If `console.profileEnd()` is not passed a profile name, the most recently started profile is stopped.\n\n", + "table_static": "\n\nThe **`console.table()`** static method displays tabular data as a table.\n\nThis function takes one mandatory argument `data`, which must be an array or an object, and one additional optional parameter `columns`.\n\nIt logs `data` as a table. Each element in the array (or enumerable property if `data` is an object) will be a row in the table.\n\nThe first column in the table will be labeled `(index)`. If `data` is an array, then its values will be the array indices. If `data` is an object, then its values will be the property names. 
Note that (in Firefox) `console.table` is limited to displaying 1000 rows (first row is the labeled index).\n\n### Collections of primitive types\n\nThe `data` argument may be an array or an object.\n\n```js\n// an array of strings\n\nconsole.table([\"apples\", \"oranges\", \"bananas\"]);\n```\n\n| (index) | Values |\n| ------- | --------- |\n| 0 | 'apples' |\n| 1 | 'oranges' |\n| 2 | 'bananas' |\n\n```js\n// an object whose properties are strings\n\nfunction Person(firstName, lastName) {\n this.firstName = firstName;\n this.lastName = lastName;\n}\n\nconst me = new Person(\"Tyrone\", \"Jones\");\n\nconsole.table(me);\n```\n\n| (index) | Values |\n| --------- | -------- |\n| firstName | 'Tyrone' |\n| lastName | 'Jones' |\n\n### Collections of compound types\n\nIf the elements in the array, or properties in the object, are themselves arrays or objects, then their elements or properties are enumerated in the row, one per column:\n\n```js\n// an array of arrays\n\nconst people = [\n [\"Tyrone\", \"Jones\"],\n [\"Janet\", \"Smith\"],\n [\"Maria\", \"Cruz\"],\n];\nconsole.table(people);\n```\n\n| (index) | 0 | 1 |\n| ------- | -------- | ------- |\n| 0 | 'Tyrone' | 'Jones' |\n| 1 | 'Janet' | 'Smith' |\n| 2 | 'Maria' | 'Cruz' |\n\n```js\n// an array of objects\n\nfunction Person(firstName, lastName) {\n this.firstName = firstName;\n this.lastName = lastName;\n}\n\nconst tyrone = new Person(\"Tyrone\", \"Jones\");\nconst janet = new Person(\"Janet\", \"Smith\");\nconst maria = new Person(\"Maria\", \"Cruz\");\n\nconsole.table([tyrone, janet, maria]);\n```\n\nNote that if the array contains objects, then the columns are labeled with the property name.\n\n| (index) | firstName | lastName |\n| ------- | --------- | -------- |\n| 0 | 'Tyrone' | 'Jones' |\n| 1 | 'Janet' | 'Smith' |\n| 2 | 'Maria' | 'Cruz' |\n\n```js\n// an object whose properties are objects\n\nconst family = {};\n\nfamily.mother = new Person(\"Janet\", \"Jones\");\nfamily.father = new Person(\"Tyrone\", \"Jones\");\nfamily.daughter = new Person(\"Maria\", \"Jones\");\n\nconsole.table(family);\n```\n\n| (index) | firstName | lastName |\n| -------- | --------- | -------- |\n| daughter | 'Maria' | 'Jones' |\n| father | 'Tyrone' | 'Jones' |\n| mother | 'Janet' | 'Jones' |\n\n### Restricting the columns displayed\n\nBy default, `console.table()` lists all elements in each row. You can use the optional `columns` parameter to select a subset of columns to display:\n\n```js\n// an array of objects, logging only firstName\n\nfunction Person(firstName, lastName) {\n this.firstName = firstName;\n this.lastName = lastName;\n}\n\nconst tyrone = new Person(\"Tyrone\", \"Jones\");\nconst janet = new Person(\"Janet\", \"Smith\");\nconst maria = new Person(\"Maria\", \"Cruz\");\n\nconsole.table([tyrone, janet, maria], [\"firstName\"]);\n```\n\n| (index) | firstName |\n| ------- | --------- |\n| 0 | 'Tyrone' |\n| 1 | 'Janet' |\n| 2 | 'Maria' |\n\n### Sorting columns\n\nYou can sort the table by a particular column by clicking on that column's label.", + "time_static": "\n\nThe **`console.time()`** static method starts a timer you can use to track how long an operation takes. You give each timer a unique name, and may have up to 10,000 timers running on a given page. 
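The timer methods are normally used in start/stop pairs; a sketch:

```js
async function loadData() {
  console.time("load-data");                    // start a named timer
  await new Promise((r) => setTimeout(r, 100)); // stand-in for real work
  console.timeLog("load-data");                 // log the elapsed time so far
  console.timeEnd("load-data");                 // stop the timer and log the total
}
```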
When you call [console/timeEnd_static] with the same name, the browser will output the time, in milliseconds, that elapsed since the timer was started.\n\nSee [Timers](/en-US/docs/Web/API/console#timers) in the [console] documentation for details and examples.\n\n", + "timeend_static": "\n\nThe **`console.timeEnd()`** static method stops a timer that was previously started by calling [console/time_static].\n\nSee [Timers](/en-US/docs/Web/API/console#timers) in the documentation for details and examples.\n\n", + "timelog_static": "\n\nThe **`console.timeLog()`** static method logs the current value of a timer that was previously started by calling [console/time_static].", + "timestamp_static": "\n\nThe **`console.timeStamp()`** static method adds a single marker to the browser's Performance tool ([Firefox](https://profiler.firefox.com/docs/#/), [Chrome](https://developer.chrome.com/docs/devtools/evaluate-performance/reference/)). This lets you correlate a point in your code with the other events recorded in the timeline, such as layout and paint events.\n\nYou can optionally supply an argument to label the timestamp, and this label will then be shown alongside the marker.\n\n", + "trace_static": "\n\nThe **`console.trace()`** static method outputs a stack trace to the console.\n\n> **Note:** In some browsers, `console.trace()` may also output the sequence of calls and asynchronous events leading to the current `console.trace()` which are not on the call stack — to help identify the origin of the current event evaluation loop.\n\nSee [Stack traces](/en-US/docs/Web/API/console#stack_traces) in the [console] documentation for details and examples.", + "warn_static": "\n\nThe **`console.warn()`** static method outputs a warning message to the console.\n\n> **Note:** In Chrome and Firefox, warnings have a small exclamation point icon next to them in the console log." + } + }, + "constantsourcenode": { + "docs": "\n\nThe `ConstantSourceNode` interface—part of the Web Audio API—represents an audio source (based upon [AudioScheduledSourceNode]) whose output is single unchanging value. This makes it useful for cases in which you need a constant value coming in from an audio source. In addition, it can be used like a constructible [AudioParam] by automating the value of its [ConstantSourceNode.offset] or by connecting another node to it; see [Controlling multiple parameters with ConstantSourceNode](/en-US/docs/Web/API/Web_Audio_API/Controlling_multiple_parameters_with_ConstantSourceNode).\n\nA `ConstantSourceNode` has no inputs and exactly one monaural (one-channel) output. The output's value is always the same as the value of the [ConstantSourceNode.offset] parameter.\n\n\n \n \n \n \n \n \n \n \n \n \n
- Number of inputs: 0
- Number of outputs: 1
", + "properties": { + "offset": "\n\nThe read-only `offset` property of the [ConstantSourceNode]\ninterface returns a [AudioParam] object indicating the numeric [a-rate](/en-US/docs/Web/API/AudioParam#a-rate) value which is always returned\nby the source when asked for the next sample.\n\n> **Note:** While the `AudioParam` named `offset` is read-only, the\n> `value` property within is not. So you can change the value of\n> `offset` by setting the value of\n> `ConstantSourceNode.offset.value`:\n>\n> ```js\n> myConstantSourceNode.offset.value = newValue;\n> ```" + } + }, + "contactaddress": { + "docs": "\n\nThe **`ContactAddress`** interface of the [contact_picker_api] represents a physical address. Instances of this interface are retrieved from the `address` property of the objects returned by [ContactsManager.getProperties].\n\nIt may be useful to refer to the Universal Postal Union website's [Addressing S42 standard](https://www.upu.int/en/Postal-Solutions/Programmes-Services/Addressing-Solutions#addressing-s42-standard) materials, which provide information about international standards for postal addresses." + }, + "contactsmanager": { + "docs": "\n\nThe **`ContactsManager`** interface of the [Contact Picker API] allows users to select entries from their contact list and share limited details of the selected entries with a website or application.\n\nThe `ContactsManager` is available through the global [navigator.contacts] property.", + "properties": { + "getproperties": "\n\nThe **`getProperties()`** method of the\n[ContactsManager] interface returns a `Promise` which resolves\nwith an `Array` of `strings` indicating which contact\nproperties are available.", + "select": "\n\nThe **`select()`** method of the\n[ContactsManager] interface returns a `Promise` which, when\nresolved, presents the user with a contact picker which allows them to select contact(s)\nthey wish to share. This method requires a user gesture for the `Promise` to\nresolve." + } + }, + "contentindex": { + "docs": "\n\nThe **`ContentIndex`** interface of the [Content Index API](/en-US/docs/Web/API/Content_Index_API) allows developers to register their offline enabled content with the browser.", + "properties": { + "add": "\n\nThe **`add()`** method of the\n[ContentIndex] interface registers an item with the [content index](/en-US/docs/Web/API/Content_Index_API).", + "delete": "\n\nThe **`delete()`** method of the\n[ContentIndex] interface unregisters an item from the currently indexed\ncontent.\n\n> **Note:** Calling `delete()` only affects the index. It does not delete anything\n> from the [Cache].", + "getall": "\n\nThe **`getAll()`** method of the\n[ContentIndex] interface returns a `Promise` that resolves with\nan iterable list of content index entries." + } + }, + "contentindexevent": { + "docs": "\n\nThe **`ContentIndexEvent`** interface of the [content index](/en-US/docs/Web/API/Content_Index_API) defines the object used to represent the [ServiceWorkerGlobalScope.contentdelete_event] event.\n\nThis event is sent to the [global scope](/en-US/docs/Web/API/ServiceWorkerGlobalScope) of a [ServiceWorker]. It contains the id of the indexed content to be removed.\n\nThe [ServiceWorkerGlobalScope.contentdelete_event] event is only fired when the deletion happens due to interaction with the browser's built-in user interface. 
It is not fired when the [ContentIndex.delete] method is called.\n\n", + "properties": { + "id": "\n\nThe read-only **`id`** property of the\n[ContentIndexEvent] interface is a `String` which identifies\nthe deleted content index via its `id`." + } + }, + "contentvisibilityautostatechangeevent": { + "docs": "\n\nThe **`ContentVisibilityAutoStateChangeEvent`** interface is the event object for the [element/contentvisibilityautostatechange_event] event, which fires on any element with set on it when it starts or stops being [relevant to the user](/en-US/docs/Web/CSS/CSS_containment#relevant_to_the_user) and [skipping its contents](/en-US/docs/Web/CSS/CSS_containment#skips_its_contents).\n\nWhile the element is not relevant (between the start and end events), the user agent skips an element's rendering, including layout and painting.\nThis can significantly improve page rendering speed.\nThe [element/contentvisibilityautostatechange_event] event provides a way for an app's code to also start or stop rendering processes (e.g. drawing on a `canvas`) when they are not needed, thereby conserving processing power.\n\nNote that even when hidden from view, element contents will remain semantically relevant (e.g. to assistive technology users), so this signal should not be used to skip significant semantic DOM updates.\n\n", + "properties": { + "skipped": "\n\nThe `skipped` read-only property of the [ContentVisibilityAutoStateChangeEvent] interface returns `true` if the user agent [skips the element's contents](/en-US/docs/Web/CSS/CSS_containment#skips_its_contents), or `false` otherwise." + } + }, + "convolvernode": { + "docs": "\n\nThe `ConvolverNode` interface is an [AudioNode] that performs a Linear Convolution on a given [AudioBuffer], often used to achieve a reverb effect. A `ConvolverNode` always has exactly one input and one output.\n\n> **Note:** For more information on the theory behind Linear Convolution, see the [Convolution article on Wikipedia](https://en.wikipedia.org/wiki/Convolution).\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
- Number of inputs: 1
- Number of outputs: 1
- Channel count mode: \"clamped-max\"
- Channel count: 1, 2, or 4
- Channel interpretation: \"speakers\"
", + "properties": { + "buffer": "\n\nThe **`buffer`** property of the [ConvolverNode] interface represents a mono, stereo, or 4-channel [AudioBuffer] containing the (possibly multichannel) impulse response used by the `ConvolverNode` to create the reverb effect.\n\nThis is normally a simple recording of as-close-to-an-impulse as can be found in the space you want to model. For example, if you want to model the reverb in your bathroom, you might set up a microphone near the door to record the sound of a balloon pop or synthesized impulse from the sink. That audio recording could then be used as the buffer.\n\nThis audio buffer must have the same sample-rate as the `AudioContext` or an exception will be thrown. At the time when this attribute is set, the buffer and the state of the attribute will be used to configure the `ConvolverNode` with this impulse response having the given normalization. The initial value of this attribute is `null`.", + "normalize": "\n\nThe `normalize` property of the [ConvolverNode] interface\nis a boolean that controls whether the impulse response from the buffer will be\nscaled by an equal-power normalization when the `buffer` attribute is set,\nor not.\n\nIts default value is `true` in order to achieve a more uniform output\nlevel from the convolver, when loaded with diverse impulse responses. If normalize is\nset to `false`, then the convolution will be rendered with no\npre-processing/scaling of the impulse response. Changes to this value do not take\neffect until the next time the `buffer` attribute is set." + } + }, + "cookiechangeevent": { + "docs": "\n\nThe **`CookieChangeEvent`** interface of the [Cookie Store API] is the event type of the [CookieStore/change_event] event fired at a [CookieStore] when any cookie changes occur. A cookie change consists of a cookie and a type (either \"changed\" or \"deleted\").\n\nCookie changes that will cause the `CookieChangeEvent` to be dispatched are:\n\n- A cookie is newly created and not immediately removed. In this case `type` is \"changed\".\n- A cookie is newly created and immediately removed. In this case `type` is \"deleted\".\n- A cookie is removed. In this case `type` is \"deleted\".\n\n> **Note:** A cookie that is replaced due to the insertion of another cookie with the same name, domain, and path, is ignored and does not trigger a change event.\n\n", + "properties": { + "changed": "\n\nThe **`changed`** read-only property of the [CookieChangeEvent] interface returns an array of the cookies that have been changed.", + "deleted": "\n\nThe **`deleted`** read-only property of the [CookieChangeEvent] interface returns an array of the cookies that have been deleted by the given `CookieChangeEvent` instance." + } + }, + "cookiestore": { + "docs": "\n\nThe **`CookieStore`** interface of the [Cookie Store API] provides methods for getting and setting cookies asynchronously from either a page or a service worker.\n\nThe `CookieStore` is accessed via attributes in the global scope in a [Window] or [ServiceWorkerGlobalScope] context. Therefore there is no constructor.\n\n", + "properties": { + "change_event": "\n\nA `change` event is fired at a [CookieStore] object when a change is made to any cookie.", + "delete": "\n\nThe **`delete()`** method of the [CookieStore] interface deletes a cookie with the given name or options object. 
The `delete()` method expires the cookie by changing the date to one in the past.\n\n", + "get": "\n\nThe **`get()`** method of the [CookieStore] interface returns a single cookie with the given name or options object. The method will return the first matching cookie for the passed parameters.\n\n", + "getall": "\n\nThe **`getAll()`** method of the [CookieStore] interface returns a list of cookies that match the name or options passed to it. Passing no parameters will return all cookies for the current context.\n\n", + "set": "\n\nThe **`set()`** method of the [CookieStore] interface sets a cookie with the given name and value or options object.\n\n" + } + }, + "cookiestoremanager": { + "docs": "\n\nThe **`CookieStoreManager`** interface of the [Cookie Store API] allows service workers to subscribe to cookie change events. Call [CookieStoreManager.subscribe] on a particular service worker registration to receive change events.\n\nA `CookieStoreManager` has an associated [ServiceWorkerRegistration]. Each service worker registration has a cookie change subscription list, which is a list of cookie change subscriptions each containing a name and URL. The methods in this interface allow the service worker to add and remove subscriptions from this list, and to get a list of all subscriptions.\n\nTo get a `CookieStoreManager`, call [ServiceWorkerRegistration.cookies].\n\n", + "properties": { + "getsubscriptions": "\n\nThe **`getSubscriptions()`** method of the [CookieStoreManager] interface returns a list of all the cookie change subscriptions for this [ServiceWorkerRegistration].\n\n", + "subscribe": "\n\nThe **`subscribe()`** method of the [CookieStoreManager] interface subscribes a [ServiceWorkerRegistration] to cookie change events.\n\n", + "unsubscribe": "\n\nThe **`unsubscribe()`** method of the [CookieStoreManager] interface stops the [ServiceWorkerRegistration] from receiving previously subscribed events.\n\n" + } + }, + "countqueuingstrategy": { + "docs": "\n\nThe **`CountQueuingStrategy`** interface of the [Streams API](/en-US/docs/Web/API/Streams_API) provides a built-in chunk counting queuing strategy that can be used when constructing streams.", + "properties": { + "highwatermark": "\n\nThe read-only **`CountQueuingStrategy.highWaterMark`** property returns the total number of chunks that can be contained in the internal queue before backpressure is applied.", + "size": "\n\nThe **`size()`** method of the\n[CountQueuingStrategy] interface always returns `1`, so that the\ntotal queue size is a count of the number of chunks in the queue." + } + }, + "credential": { + "docs": "\n\nThe **`Credential`** interface of the [Credential Management API](/en-US/docs/Web/API/Credential_Management_API) provides information about an entity (usually a user) normally as a prerequisite to a trust decision.\n\n`Credential` objects may be of four different types:\n\n- [FederatedCredential]\n- [IdentityCredential]\n- [PasswordCredential]\n- [PublicKeyCredential]", + "properties": { + "id": "\n\nThe **`id`** property of the\n[Credential] interface returns a string containing the\ncredential's identifier. This might be any one of a GUID, username, or email\naddress.", + "type": "\n\nThe **`type`** property of the\n[Credential] interface returns a string containing the\ncredential's type. Valid values are `password`, `federated` and\n`public-key`." 
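The asynchronous `cookieStore` methods described above can replace most manual `document.cookie` string handling; a sketch (usable in secure contexts where the Cookie Store API is implemented):

```js
async function rememberTheme(theme) {
  await cookieStore.set({ name: "theme", value: theme, path: "/" });

  const cookie = await cookieStore.get("theme");
  console.log(cookie?.value);

  cookieStore.addEventListener("change", (event) => {
    console.log("changed:", event.changed, "deleted:", event.deleted);
  });

  await cookieStore.delete("theme");
}
```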
+ } + }, + "credentialscontainer": { + "docs": "\n\nThe **`CredentialsContainer`** interface of the [Credential Management API](/en-US/docs/Web/API/Credential_Management_API) exposes methods to request credentials and notify the user agent when events such as successful sign in or sign out happen. This interface is accessible from [Navigator.credentials].", + "properties": { + "create": "\n\nThe **`create()`** method of the [CredentialsContainer] interface returns a `Promise` that resolves with a new credential instance based on the provided options, the information from which can then be stored and later used to authenticate users via [CredentialsContainer.get].\n\nThis is used by multiple different credential-related APIs with significantly different purposes:\n\n- The [Credential Management API](/en-US/docs/Web/API/Credential_Management_API) uses `create()` to create basic federated credentials or username/password credentials.\n- The [Web Authentication API](/en-US/docs/Web/API/Web_Authentication_API) uses `create()` to create public key credentials (based on asymmetric cryptography).\n\nThe below reference page starts with a syntax section that explains the general method call structure and parameters that apply to all the different APIs. After that, it is split into separate sections providing parameters, return values, and examples specific to each API.\n\n> **Note:** This method is restricted to top-level (i.e., a document running directly inside a browser tab, and not embedded inside another document). Calls to it from within an `